

Noam Chomsky 

Syntactic 
Structures 





mouton 



A Mouton Classic 



"I had already decided I wanted to be a linguist 
when I discovered this book. But it is unlikely 
that I would have stayed in the field without 
it. It has been the single most inspiring book 
on linguistics in my whole career." 

Henk van Riemsdijk 



ISBN 3-11-017279-8 



www.deGruyter.com 






Syntactic Structures 




Mouton de Gruyter 
Berlin • New York 



Syntactic Structures 

by 

Noam Chomsky 

Second Edition 

With an Introduction by David W. Lightfoot 



Mouton de Gruyter 
Berlin • New York 2002 



Mouton de Gruyter (formerly Mouton, The Hague) 

is a Division of Walter de Gruyter GmbH & Co. KG, Berlin. 



First edition published in 1957. Various reprints. 



® Printed on acid-free paper which falls within the guidelines 
of the ANSI to ensure permanence and durability. 



ISBN 3-11-017279-8 

Bibliographic information published by Die Deutsche Bibliothek 

Die Deutsche Bibliothek lists this publication in the Deutsche 
Nationalbibliografie; detailed bibliographic data is available on the 
Internet at <http://dnb.ddb.de>. 

© Copyright 1957, 2002 by Walter de Gruyter GmbH & Co. KG, 10785 
Berlin. 

All rights reserved, including those of translation into foreign languages. No 
part of this book may be reproduced in any form or by any means, electronic 
or mechanical, including photocopy, recording, or any information storage and 
retrieval system, without permission in writing from the publisher. 
Printing & Binding: Werner Hildebrand, Berlin. 
Cover design: Sigurd Wendland, Berlin. 
Printed in Germany. 



Introduction* 



Noam Chomsky's Syntactic Structures was the snowball which began 
the avalanche of the modern "cognitive revolution." The cognitive per- 
spective originated in the seventeenth century and now construes mod- 
ern linguistics as part of psychology and human biology. Depending 
on their initial conditions, children grow into adults with various lan- 
guage systems, some variety of Japanese if raised in Tokyo and Cor- 
nish English if raised in the village of Polperro. Linguists seek to de- 
scribe the mental systems that Japanese or Cornish people have, their 
language "organs." These systems are represented somehow in human 
mind/brains, are acquired on exposure to certain kinds of experiences, 
and are used in certain ways during speech comprehension or pro- 
duction and for a variety of purposes: communication, play, affects, 
group identity, etc. Linguists also specify the genetic information, com- 
mon to the species, which permits the growth of mature language or- 
gans in Cornish, Japanese, Dutch, Kinande and Navaho children. 

The snowball has gained bulk and speed along these naturalistic 
lines over the last fifty years. The variety of languages, the develop- 
mental patterns manifested by young children, the ways in which ma- 
ture systems are underdetermined by childhood experience, have pro- 
vided a wealth of discoveries and of empirical demands for theories to 
meet, opening the prospect for more empirical demands as we begin 
to understand the brain mechanisms that might be involved in under- 
standing and producing speech. This kind of work on the growth of 
an individual's language capacity has influenced people studying other 
aspects of human cognition, where the empirical demands on theories 
are partially different and where it is harder to tease apart the contri- 
butions of nature and nurture. Philosophers, psychologists, neurosci- 
entists and even immunologists (see Jerne 1967 and his 1985 Nobel 
Prize address) have engaged with this work. That is the avalanche and 
it has affected many parts of the cognitive mountainside; Anderson 
and Lightfoot (2002) provides a recent survey of some of the work set 
in motion and Chomsky (2000) gives his current views. 

It is interesting to look back from here on the snowball. Snowballs 
always begin small and few people write books of just 118 short pages. 
However, what is striking about this little book is that it contains 
nothing on cognitive representations, nothing on grammars as mental 
systems triggered by childhood exposure to initial linguistic experi- 
ences. Chomsky arrived at some conclusions and developed some lines 
of thought which naturally provoked a radical re-thinking of the status 
of grammatical descriptions, but, to judge from the earliest texts, it 
appears that he did this without any particular concern for cognitive 
representations. 

The best discussion of these early years is the introduction to the 
version of the dissertation which was published in 1975 (The logical 
structure of linguistic theory, henceforth LSLT). There Chomsky, writ- 
ing in 1973, said that there would have been little notice of Syntactic 
Structures in the profession if Robert Lees had not written an extensive 
review, published in Language more or less simultaneously with the 
appearance of the book. But that review touched only briefly on the 
matter of mental representations. Furthermore, Chomsky's judgement 
may be too modest; the book was well-received in a number of re- 
views, and the work had a major impact quickly, notably through 
Chomsky's presentation at the Third Texas Conference in 1958 (pub- 
lished as Chomsky 1962), although not everything went smoothly: the 
dissertation was rejected for publication by the Technology Press of 
MIT and an article was rejected by the journal Word. 

Syntactic Structures itself consisted of lecture notes for undergradu- 
ate classes at MIT, which C. H. van Schooneveld offered to publish 
with Mouton, "a sketchy and informal outline of some of the material 
in LSLT" (Chomsky 1975: 3). So these are the three central texts from 
this period: LSLT, Syntactic Structures, and Lees' review. It is also 
useful to look at the earliest analytical work of the new paradigm: 
Klima (1964), Lees (1960) and (Lees and Klima 1963), for example. 
However, one should keep in mind in addition that Chomsky was 
working on his review of Skinner's acclaimed Verbal behavior (Chom- 
sky 1959); the review was submitted in 1957 and sketched a way of 
looking at psychological behavior quite different from the prevailing 
orthodoxies. 

The book was "part of an attempt to construct a formalized general 
theory of linguistic structure ... by pushing a precise but inadequate 
formulation to an unacceptable conclusion, we can often expose the 
exact source of this inadequacy and, consequently, gain a deeper un- 
derstanding of the linguistic data" (p. 5). This was a foretaste of a 
strategy that Chomsky has pursued throughout his career, always willing 
to formulate proposals in accurate detail in order to see where 
the weaknesses lie, then reformulating, sometimes in radical fashion, 
moving much snow on the mountainside; one thinks of the filters of 
Chomsky and Lasnik (1977), the indexing conventions in the appendix 
of Chomsky (1980), and the features of Chomsky (1995: ch.4), which 
sought precision over elegance and biological plausibility and then 
gave rise to major reformulations of linguistic theory. The immediate 
goal of the new work was to formulate precise, explicit, "generative" 
accounts, free of intuition-bound notions. 

The fundamental aim in the linguistic analysis of a language L is to sepa- 
rate the grammatical sequences which are the sentences of L from the un- 
grammatical sequences which are not sentences of L. The grammar of L 
will thus be a device that generates all of the grammatical sequences of L 
and none of the ungrammatical ones. (p. 13) 1 

Lees and others were impressed with the outline of what they took to 
be a truly scientific perspective, and these were days of much concern 
about the nature of science. Lees viewed the book as 

one of the first serious attempts on the part of a linguist to construct within 
the tradition of theory-construction a comprehensive theory of language 
which may be understood in the same sense that a chemical, biological 
theory is ordinarily understood by experts in those fields. It is not a mere 
reorganization of the data into a new kind of library catalog, nor another 
speculative philosophy about the nature of Man and Language, but rather 
a rigorous explication of our intuitions about language in terms of an overt 
axiom system, the theorems derivable from it, explicit results which may 
be compared with new data and other intuitions, all based plainly on an 
overt theory of the internal structure of languages. (Lees 1957: 377-8) 

Chomsky begins Syntactic Structures, then, by aiming to construct 

a grammar that can be viewed as a device of some sort for producing the 
sentences of the language under analysis. More generally, linguists must be 
concerned with the problem of determining the fundamental underlying 
properties of successful grammars. The ultimate outcome of these investi- 
gations should be a theory of linguistic structure in which the descriptive 
devices utilized in particular grammars are presented and studied ab- 
stractly, with no specific reference to particular languages. One function of 
this theory is to provide a general method for selecting a grammar for each 
language, given a corpus of sentences of this language. (p. 11) 






The issue of selecting a grammar in this formulation was one for ana- 
lysts comparing theories, not for children. The celebrated discussion in 
chapter 6 about the goals of linguistic theory, the distinction between 
discovery, decision and evaluation procedures, is often cited as a dis- 
cussion about what a child might be expected to do in the process of 
acquiring his/her grammar. However, the text concerns the goals of an 
analyst and combats the structuralist goal of seeking a discovery 
method for grammars, whereby an analyst would follow the prescrip- 
tions of a manual, "mechanical procedures for the discovery of gram- 
mars" (p.55, n.6), and arrive at the correct description of some lan- 
guage. Chomsky argued, in contrast, that it was too ambitious to ex- 
pect such a methodology and that the most realistic goal was to find 
a way of comparing hypotheses for generating a particular corpus of 
data. No talk of children but an effort to thwart the positivist notion 
that one could discover a predefined path to scientific truth (cf. Popper 
1959). "One may arrive at a grammar by intuition, guess-work, all 
sorts of partial methodological hints, reliance on past experience, etc 
... Our ultimate aim is to provide an objective, non-intuitive way to 
evaluate a grammar once presented" (p. 56). 

In particular, there was no reason to expect a discovery method 
whereby a successful phonetic analysis would permit a successful pho- 
nemic analysis, which would allow a good morphological analysis and 
then a good syntactic analysis. 

Once we have disclaimed any intention of finding a practical discovery 
procedure for grammars, certain problems that have been the subject of 
intense methodological controversy simply do not arise. Consider the 
problem of interdependence of levels. (p. 56) 

If units are defined by taxonomic procedures, then they need to be 
constructed on lower levels before higher-level units are constructed 
out of those lower-level units. However, once the goals are restricted 
to achieve an evaluation procedure, one may have independent levels 
of representation without circularity of definitions. Indeed, Chomsky 
argued that analysis at higher levels (of syntax) might influence lower 
(e.g. morphological) levels of analysis, and therefore that work on 
syntax could proceed even though there may be unresolved problems 
of phonemic or morphological analysis (p. 59), perhaps to the advan- 
tage of the phonemic analysis. 






This was the major methodological innovation and the claim to a 
genuinely scientific approach was based on the rigor of the formal, 
explicit, generative accounts and on the move away from seeking a 
discovery procedure in favor of an evaluation procedure for rating 
theories. 

Any scientific theory is based on a finite number of observations, and it 
seeks to relate the observed phenomena and to predict new phenomena by 
constructing general laws in terms of hypothetical constructs such as (in 
physics, for example) "mass" and "electron." Similarly, a grammar of Eng- 
lish is based on a finite corpus of utterances (observations), and it will 
contain certain grammatical rules (laws) stated in terms of the particular 
phonemes, phrases, etc., of English (hypothetical constructs). (p. 49) 

The technical innovation was to motivate different levels of analysis 
and representation, which were related to each other formally by the 
device of a "transformational rule." That involved various claims 
about the nature of grammars, that their primitives were indepen- 
dently defined, not a product of more basic semantic, functional or 
notional concepts (chapter 2), that they could not be formulated 
through finite-state Markov processes (chapter 3), and that restricting 
rule schemas to those of phrase structure grammars yielded clumsiness 
and missed insights and elegance which would be facilitated by opera- 
tions relating one level of analysis to another, the so-called trans- 
formations (chapters 4, 5 and 7). 

Chapter 5 offered three arguments for extending the expressive 
power of grammars beyond that of unadorned phrase structure gram- 
mars, one relating to conjunction reduction, another relating to active- 
passive relations. The showcase analysis, however, the part of Syntac- 
tic Structures that excited the author (LSLT, 30-31) and had the 
greatest effect on readers, was the new treatment of English auxiliary 
verbs (section 5.3). Chomsky proposed a simple Auxiliary Transforma- 
tion, later dubbed "affix hopping," whereby an affix like -ing, -en or 
an abstract tense marker could be moved to the immediate right of 
an adjacent verb (29.ii). This ingenious transformation, mapping one 
abstract level of representation into another (not sentences into other 
sentences), avoided hopelessly complex phrase structure rules and 
yielded an elegant account for the distribution of the "periphrastic 
do," which could be characterized now as occurring with "stranded" 
affixes, which had no adjacent verb to hop over (p.62). He observed 
that "the grammar is materially simplified when we add a transforma- 
tional level, since it is now necessary to provide phrase structure di- 
rectly only for kernel sentences" (p.47). 
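To make the mechanics concrete, here is a minimal sketch of an affix-hopping step in Python. It is an illustration, not Chomsky's formulation: the token names and the AFFIXES and VERBS sets are invented, and the insertion of periphrastic "do" for stranded affixes is reduced to a single clause.

# Toy affix-hopping pass over a string of abstract tokens.
AFFIXES = {"PAST", "-en", "-ing"}
VERBS = {"have", "be", "take"}

def affix_hop(tokens):
    out, i = [], 0
    while i < len(tokens):
        if tokens[i] in AFFIXES and i + 1 < len(tokens) and tokens[i + 1] in VERBS:
            # Af V -> V + Af: the affix hops over the adjacent verb
            out.append(tokens[i + 1] + "+" + tokens[i])
            i += 2
        elif tokens[i] in AFFIXES:
            # A stranded affix has no verb to hop over: "do" carries it
            out.append("do+" + tokens[i])
            i += 1
        else:
            out.append(tokens[i])
            i += 1
    return out

print(affix_hop(["John", "PAST", "have", "-en", "take", "the", "book"]))
# ['John', 'have+PAST', 'take+-en', 'the', 'book']
print(affix_hop(["John", "PAST", "not", "take", "the", "book"]))
# ['John', 'do+PAST', 'not', 'take', 'the', 'book']

In the first toy string the tense marker hops onto "have" and -en onto "take"; in the second, "not" intervenes, the tense marker is stranded, and it surfaces on inserted "do", mirroring the distribution of the periphrastic do described above.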

Chapter 7, entitled Some transformations in English, extended trans- 
formational analysis to negative, interrogative and other sentence- 
types, yielding further simplifications over pure phrase structure gram- 
mars. The transformations themselves provided evidence for certain 
constituents (p. 83) and the same units recurred in a number of opera- 
tions, suggesting that genuine generalizations were being captured. 
The fundamental aspects of the analysis of auxiliaries have survived 
extensive discussion of almost 50 years. In current formulations a 
central parameter of grammatical variation lies in how verbs may be 
connected to their tense markers, either as a result of a syntactic opera- 
tion raising a verb to a higher functional category containing the tense 
marker (as in French, cf. Emonds 1978) or what is now seen as a 
morphological operation lowering the tense marker on to an adjacent 
verb (as in modern English, cf. Lightfoot 1993, Baker 2002), Chom- 
sky's (1957) Auxiliary Transformation; Lasnik (2000) offers detailed 
discussion of this distinction and its relation to the proposals of Syn- 
tactic Structures. 

Always concerned to formulate as precisely as possible, Chomsky 
pointed out that the analysis entailed an ordering in the application 
of transformational rules (p.44) and a distinction between optional 
and obligatory rules (p.45). That yields precision but also problems, if 
one views grammars as acquired by children exposed only to primary 
data. If two rules must be ordered or if a rule needs to be classified as 
obligatory, then somebody viewing grammars as aspects of mature 
cognition would wonder how that could be triggered in a child. If the 
two rules are misordered or if an obligatory rule applies optionally, 
the grammar would generate non-occurring sentences or structures. 
Those non-occurring sentences constitute the linguist's evidence for the 
ordering or the obligatory character of rules, but that evidence is not 
available to young children. If the sentences don't occur, they can't be 
part of the primary data, not part of what a child experiences, and 
we have a grammar which cannot be triggered and is "unlearnable". 
However, this was not an issue in 1957, when precision was the over- 
riding goal and matters of learnability had not yet been raised explic- 
itly. 

The last substantive chapter of Syntactic Structures deals with syn- 
tax and semantics, a relationship which has been widely misunderstood. 
Chomsky argued that grammars are autonomous and indepen- 
dent of meaning in the sense that their primitives are not defined in 
semantic terms (p. 17). That "should not, however, blind us to the fact 
that there are striking correspondences between the structures and ele- 
ments that are discovered in formal, grammatical analysis and specific 
semantic functions" (p. 101). So the units of syntactic analysis, syntac- 
tic constituents, are almost identical to the units of semantic analysis: 
the ambiguity of I saw the man with a telescope is reflected in two 
syntactic structures, one where a man with a telescope is a constituent 
and one where it is not. The work assumes a use-theory of meaning, that 
grammars are embedded in a broader semiotic theory which uses the 
grammar to determine the meaning and reference of expressions. There 
are striking correspondences between syntactic and semantic proper- 
ties and the study of "the structure of language as an instrument may 
be expected to provide insight into the actual use of language" (p. 103); 
to argue that syntactic primitives are not defined semantically is not 
to deny connections between form and meaning (for discussion, see 
LSLT, 18-23 and Lees 1957: 393-5). 

Syntactic Structures, of course, reflected the ambient intellectual cul- 
ture of the mid-1950s in some ways. Chomsky offered operational defi- 
nitions of well-formed sentences of a kind that a behaviorist psycholo- 
gist could understand: they did not need to be "meaningful" or "signifi- 
cant" in any semantic sense (p. 15), not statistically frequent; they 
could be read with normal intonation contours, recalled readily, and 
learned quickly. He carried over the notion of kernel sentences from 
his teacher Zellig Harris (1951), reformulating the notion as referring 
to sentences which had undergone no optional, only obligatory trans- 
formations (p.45); LSLT (41-45) offers detailed discussion of the rela- 
tion between Harris' transformations and Chomsky's early work. In- 
deed, one might argue that Syntactic Structures reflected existing prac- 
tice in its silence on matters of cognition: there is reason to believe that 
structuralists were concerned with matters of cognition and wanted 
analyses which were psychologically plausible, but the concern was 
implicit. 

The methodological innovations have endured, and likewise many 
of the technical proposals. Chomsky (1995) has revived the distinction 
between singulary and generalized transformations, the former affect- 
ing single structures and the latter embedding clauses within other 
structures. That distinction was abandoned after Syntactic Structures, 






replaced in Chomsky (1965) by the principle of the cyclic application 
of rules, affecting most deeply embedded domains first, and then 
working up through less deeply embedded domains sequentially. 

One can identify three phases in work on generative grammar. The 
first phase, initiated by Syntactic Structures and continuing through 
Aspects of the theory of syntax (Chomsky 1965), elaborated the expres- 
sive power of grammars to include different levels of representation 
(Syntactic Structures) and a lexicon (the major technical innovation of 
Chomsky 1965). The second phase, beginning in the 1960s and culmi- 
nating in Government and Binding models, sought to constrain the 
expressive power of derivations, so that operations became very gene- 
ral, along the lines of "Move something," and general principles of the 
theory of grammar ("Universal Grammar" by the 1960s) constrained 
such operations to apply appropriately. The third phase has sought 
substantive economy principles beyond the methodological clippings 
of Ockham's razor, under the Minimalist Program of Chomsky (1995). 
The technical advances have been considerable (far too considerable 
to outline in a brief introduction) and we have learned vast amounts 
about the properties of individual languages, about the developmental 
stages that children go through, about the kind of variation that gram- 
matical parameters permit (Baker 2001 insightfully analogizes the 
parametric approach to work on the periodic table of elements which 
underlie all chemical substances). 

These developments have taken place as linguists have taken seri- 
ously the cognitive perspective, viewing grammars as mental systems 
which grow in children on exposure to primary data, subject to the 
prescriptions of a genetically specified theory of grammars which per- 
mits only certain structures and options. And they have taken place 
as linguists have taken their mathematical models seriously. A clear 
example of this was an ad hoc principle which was required with the 
introduction of a distinct Binding Theory (Chomsky 1981). The un- 
grammaticality of *They expected that each other would leave was as- 
cribed to Principle A of the new Binding Theory, leaving no account 
for the non-occurrence of *They were expected would leave, featuring a 
displaced they; this had formerly been attributed to the same indexing 
conventions which had excluded the sentence with the reciprocal each 
other. The response was a handful of snow to block the gap, precisely 
shaped and called the RES-NIC, the residue of the earlier Nominative 
Island Constraint, which accounted for the mismoving they. This irreg- 
ular snowball gave rise, in turn, to its own avalanche, the Empty Cate- 
gory Principle, a condition on the positions from which elements 
might be displaced. That principle yielded many new discoveries about 
a host of languages over the course of the 1980s (see, for example, 
Rizzi 1990). It is not clear whether single ideas can be extrapolated 
from general systems and be awarded a prize for being the most pro- 
ductive, but the ECP would be a candidate and, in any case, illustrates 
the productiveness of taking the models seriously as predictive mecha- 
nisms. 

What is remarkable about Syntactic Structures is how easily its 
claims were translatable into claims about human cognition, as 
Chomsky was to make explicit in his review of Skinner and then, fa- 
mously, in the first chapter of Chomsky (1965). There he redefined the 
field in more fundamental fashion and linked it to work on human 
psychology; from then on, matters of acquisition became central to 
linguistic theorizing. The easy translation is the reason that the little 
book, having no discussion of matters of cognition, is nonetheless 
plausibly seen as the snowball which started it all. The discussion 
about the goals of linguistic theory, for example, was straightfor- 
wardly translated point-for-point into criteria for a theory of language 
acquisition by children: the theory provides an evaluation metric by 
which children rate the success of candidate grammars for the 
purposes of understanding some finite corpus of data embodied in 
their initial linguistic experiences, converging ultimately on the most 
successful grammar. 

Before he wrote the introduction to LSLT, Chomsky had come to 
view grammars as representations of fundamental aspects of the 
knowledge possessed by a speaker-hearer, i.e. as claims about psychol- 
ogy (LSLT, 5). Furthermore, there was a precise analog between the 
methodological claims of Syntactic Structures and LSLT and psycho- 
logical claims about human cognition. 

The construction of a grammar of a language by a linguist is in some 
respects analogous to the acquisition of language by a child. The linguist 
has a corpus of data; the child is presented with unanalyzed data of lang- 
uage use. (LSLT, 11) 

The language learner (analogously, the linguist) approaches the problem 
of language acquisition (grammar construction) with a schematism that 
determines in advance the general properties of human language and the 
general properties of the grammars that may be constructed to account for 
linguistic phenomena. (LSLT, 12) 




We thus have two variants of the fundamental problem of linguistics, as 
it was conceived in this work: under the methodological interpretation, the 
problem is taken to be the justification of grammars; under the psychologi- 
cal interpretation, the problem is to account for language acquisition ... 
Under the methodological interpretation, the selected grammar is the lin- 
guist's grammar, justified by the theory. Under the psychological inter- 
pretation, it is the speaker-hearer's grammar, chosen by the evaluation pro- 
cedure from among the potential grammars permitted by the theory and 
compatible with the data as represented in terms of the preliminary analy- 
sis. (LSLT, 36) 

The "psychological analog" to the methodological problem of con- 
structing linguistic theory was not discussed in LSLT, but Chomsky 
wrote that it lay in the immediate background of his own thinking: 
"To raise this issue seemed to me, at the time, too audacious" {LSLT, 
35). So, the "realist" position was taken for granted but not discussed, 
because too audacious. Hence the ease of taking the general theory as 
a psychological theory that attempts to characterize the innate human 
"language faculty." 2 

There is plenty of internal evidence that Chomsky was interacting 
at this time with Eric Lenneberg, who was to pursue the biological 
perspective on language earliest and furthest (Lenneberg 1967). How- 
ever, the only explicit discussion of the cognitive/biological perspective 
in our central texts was the final section of the review, where Lees 
raised these issues in a kind of mystified optimism. 

Perhaps the most baffling and certainly in the long run by far the most 
interesting of Chomsky's theories will be found in their cohesions with the 
field of human psychology. Being totally incompetent in this area, I shall 
allude to only one possible consideration, but one which I find extremely 
intriguing. If this theory of grammar which we have been discussing can 
be validated without fundamental changes, then the mechanism which we 
must attribute to human beings to account for their speech behavior has 
all the characteristics of a sophisticated scientific theory. (Lees 1957: 406) 
... If we are to account adequately for the indubitable fact that a child by 
the age of five or six has somehow reconstructed for himself the theory of 
his language, it would seem that our notions of human learning are due 
for some considerable sophistication. (Lees 1957: 408) 

And considerable sophistication there has been. Most subsequent work 
on language learning has followed Syntactic Structures in seeking the- 
ories which evaluate grammars relative to a corpus of utterances. That 
is true of Chomsky (1965), Chomsky and Halle (1968), and of more 
modern work. Robin Clark's Fitness Metric measures precisely the fit- 
ness of grammars with respect to a set of sentences (1). 

(1) Fitness Metric (Clark 1992) 

$\dfrac{\left( \sum_{j=1}^{n} v_j + b \sum_{j=1}^{n} s_j + c \sum_{j=1}^{n} e_j \right) - \left( v_i + b s_i + c e_i \right)}{(n-1) \left( \sum_{j=1}^{n} v_j + b \sum_{j=1}^{n} s_j + c \sum_{j=1}^{n} e_j \right)}$ 

where 

v_i = the number of violations signaled by the parser associated with a given 
parameter setting; 

s_i = the number of superset settings in the counter; b is a constant superset 
penalty > 1; 

e_i = the measure of elegance (= number of nodes) of counter i; c < 1 is a 
scaling factor 

The central idea here is that certain grammars provide a means to 
understand certain sentences and not others; that is, they generate cer- 
tain sentences but not others. The Fitness Metric quantifies the failure 
of grammars to parse sentences, the "violations", v. The sum term, 
sigma, totals all the violations of all grammars under consideration, 
perhaps five grammars with a total of 50 failures or violations. We 
then subtract the violations of any single grammar and divide by the 
total violations (multiplied by n - 1). This provides a number which 
grades candidate grammars. For example, if one candidate has 10 viol- 
ations, its score is 50-10, divided by some number; if another candi- 
date has 20 violations, its score is 50-20, divided by that number, a 
lower score. (There are two other factors involved in the equation, a 
superset penalty s and a measure of elegance e, but they are subject to 
a scaling condition and play a lesser role, which I ignore here.) I have 
sketched Clark's Fitness Metric because it is the most sophisticated 
and precisely worked out evaluation measure that I know. What it and 
other such evaluation measures do is rate grammars against a set of 
data, as outlined in 1957. 
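As a concrete illustration of how such a measure grades candidates, here is a minimal sketch in Python. It is my illustration, not Clark's implementation: the violation counts and the constants b and c are hypothetical, and Clark (1992) defines these quantities over parser output rather than bare lists.

def fitness(violations, supersets, elegance, b=2.0, c=0.5):
    """Score each of n candidate grammars; higher is fitter.
    violations[i] = v_i, supersets[i] = s_i, elegance[i] = e_i."""
    n = len(violations)
    total = sum(violations) + b * sum(supersets) + c * sum(elegance)
    return [
        (total - (v + b * s + c * e)) / ((n - 1) * total)
        for v, s, e in zip(violations, supersets, elegance)
    ]

# Five hypothetical grammars with 50 violations in all, as in the worked
# example above: 10 violations score (50 - 10)/((5 - 1) * 50) = 0.20,
# while 20 violations score only (50 - 20)/((5 - 1) * 50) = 0.15.
print(fitness([10, 20, 5, 8, 7], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]))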

Although most work on learnability has followed the goals laid out 
for linguistic theories in Syntactic Structures, future work may permit 
a different approach to acquisition, providing a kind of discovery pro- 



l. Fitness Metric (Clark 1992) 



{T J=l Vj + bT j=] Sj + cT J=l e^ 



- (v, + bsi + ce,) 




xvi 



Introduction by David W. Light foot 



cedure: children may seek certain elements of grammatical structure, 
cues. These cues would be prescribed at the level of UG in a kind of 
menu from which children select. Some cues represent parameters of 
variation (i.e. occurring in some grammars but not others) and all 
cues would be found in the mental representations which result from 
parsing phrases and sentences that children are exposed to (Dresher 
1999, Fodor 1998, Lightfoot 1999). Some of those phrases and senten- 
ces would "express" cues, requiring the cued structure for understand- 
ing. Under this view, children acquire a mature grammar by accumu- 
lating cued structures and children (and linguists) have a kind of dis- 
covery procedure. 

Nearly fifty years ago Chomsky argued for explicit rigor, for various 
levels of representation provided by a theory of grammar, and for 
seeking a precise evaluation metric to compare grammars. After al- 
most fifty years of enrichment, we can revisit this matter and many 
others in the light of subsequent work developing theories of grammar 
and spelling out the details of Universal Grammar, seen now as defin- 
ing the language faculty. The ongoing investigation of this part of 
human cognition has been immensely productive; we have witnessed 
the "considerable sophistication" that Lees predicted. We have learned 
a vast amount about many languages and we have learned different 
kinds of things: generativists adopting the biological, cognitive per- 
spective have focused on where generalizations break down, because 
that reveals poverty-of-stimulus problems which illustrate information 
conveyed by the linguistic genotype, UG (see The Linguistic Review 
19.1, 2002, for extensive discussion of the role of poverty-of-stimulus 
arguments). The avalanche has also given rise to new kinds of studies 
of language acquisition by young children, new experimental tech- 
niques, explorations of language use in various domains, new com- 
puter models, new approaches to language history, all shaped by the 
core notion of a language organ. It has also spawned new approaches 
to old philosophical questions, notions of meaning and reference, and 
Chomsky has taken the lead in this area (for example, Chomsky 2000). 
The cognitive analysis of language was re-energized by this remarkable 
snowball and it will continue for much longer, because much remains 
to be done now that the perspective has been widened. 



Georgetown, July 2002 



David W. Lightfoot 



Notes 






* I am grateful to colleagues who have commented on an earlier draft of this introduc- 
tion: Noam Chomsky, Morris Halle, Norbert Hornstein, and Neil Smith. They are all 
old enough to remember 1957, appearances to the contrary. 

1 Bear in mind that these were notes for undergraduate classes. Students were not famil- 
iar with Markov processes and needed to be introduced to string generating mecha- 
nisms. This material does not feature in LSLT, presumably because it was remote 
from empirical issues. The idea was to show that much richer systems were inadequate 
for natural language and that the goal should be the strong generation of structures, 
there being no grammatical-ungrammatical distinction, as elaborated in LSLT (Chap- 
ter 5). 

2 Linguists who restrict themselves to mathematical descriptions might be said, gener- 
ously, to be following the example of Syntactic structures, hoping that their analyses 
will be as automatically translatable into psychological claims as the ideas of 1957. 
However, they deprive themselves of the wealth of empirical demands and guidance 
which come specifically from the psychological dimension, matters of learnability, etc. 

It is worth noting in the shade of a footnote that linguists pursuing the cognitive 
perspective postulate genetic information needed to solve acquisition problems. By 
discovering more and more about the genetic information required, we hope one day 
to learn something about the appropriate form of that information, a different and 
more ambitious matter. Hypotheses abound but nobody is under any illusion that 
current ideas come anywhere near what will be needed to unify our hypotheses with 
what is known about genetic structure (if that is ever possible). It may meanwhile be 
correct to say that certain information is required, but that just means that at this 
stage of understanding we are doing Mendelian genetics. 



References 

Anderson, S. R. and D. W. Lightfoot 

2002 The language organ: Linguistics as cognitive physiology. Cambridge: 

Cambridge University Press. 

Baker, M. A. 

2001 The atoms of language. New York: Basic Books. 

2002 Building and merging, not checking: The nonexistence of (Aux)-S-V-O 
languages. Linguistic Inquiry 33.2: 321-8. 

Chomsky, N. 

1959 Review of B. F. Skinner Verbal Behavior. Language 35: 26-57. 

1962 A transformational approach to syntax. In A. Hill, ed., Proceedings of 

the Third Texas Conference on Problems of Linguistic Analysis in English. 

Austin: University of Texas Press. 
1965 Aspects of the theory of syntax. Cambridge, MA: MIT Press. 

1975 The logical structure of linguistic theory. New York: Plenum. 

1980 On binding. Linguistic Inquiry 11.1: 1-46. 

1981 Lectures on government and binding. Dordrecht: Foris. 






1995 The Minimalist Program. Cambridge, MA: MIT Press. 

2000 New horizons in the study of language and mind. Cambridge: Cambridge 

University Press. 
Chomsky, N. and M. Halle 

1968 The sound pattern of English. New York: Harper and Row. 

Chomsky, N. and H. Lasnik 

1977 Filters and control. Linguistic Inquiry 8.3: 425-504. 
Clark, R. 

1992 The selection of syntactic knowledge. Language Acquisition 2: 83-149. 

Dresher, B. E. 

1999 Charting the learning path: Cues to parameter setting. Linguistic Inquiry 

30.1: 27-67. 

Emonds, J. 

1978 The verbal complex V'-V in French. Linguistic Inquiry 9: 151-75. 
Fodor, J. D. 

1998 Unambiguous triggers. Linguistic Inquiry 29.1: 1-36. 

Harris, Z. S. 

1951 Methods in structural linguistics. Chicago: University of Chicago Press. 
Jerne, N. K. 

1967 Antibodies and learning: Selection versus instruction. In G. C. Quarton, 

T. Melnechuk and F. O. Schmitt, eds., The neurosciences: A study pro- 
gram. New York: Rockefeller University Press. 

1985 The generative grammar of the immune system. Science 229: 1057- 
1059. 

Klima, E. S. 

1964 Negation in English. In J. Fodor and J. Katz, eds., The structure of 
language. Englewood Cliffs, NJ: Prentice Hall. 

Lasnik, H. 

2000 Syntactic Structures revisited: Contemporary lectures on classic trans- 

formational theory. Cambridge, MA: MIT Press. 

Lees, R. 

1957 Review of Syntactic Structures. Language 33.3: 375-408. 

1960 The grammar of English nominalizations. Bloomington: Indiana Univer- 

sity Press. 
Lees, R. and E. S. Klima 

1963 Rules for English pronominalization. Language 39: 17-28. 

Lenneberg, E. H. 

1967 The biological foundations of language. New York: John Wiley. 

Lightfoot, D. W. 

1993 Why UG needs a learning theory: triggering verb movement. In C. 

Jones, ed., Historical linguistics: Problems and perspectives. London: 
Longman [reprinted in I. Roberts and A. Battye, eds., Clause structure 
and language change. Oxford: Oxford University Press]. 

1999 The development of language: Acquisition, change and evolution. Oxford: 

Blackwell. 

Popper, K. R. 

1959 The logic of scientific discovery. London: Hutchinson. 

Rizzi, L. 

1990 Relativized minimality. Cambridge, MA: MIT Press. 



PREFACE 



This study deals with syntactic structure both in the broad sense 
(as opposed to semantics) and the narrow sense (as opposed to 
phonemics and morphology). It forms part of an attempt to con- 
struct a formalized general theory of linguistic structure and to 
explore the foundations of such a theory. The search for rigorous 
formulation in linguistics has a much more serious motivation than 
mere concern for logical niceties or the desire to purify well-estab- 
lished methods of linguistic analysis. Precisely constructed models 
for linguistic structure can play an important role, both negative 
and positive, in the process of discovery itself. By pushing a precise 
but inadequate formulation to an unacceptable conclusion, we can 
often expose the exact source of this inadequacy and, consequently, 
gain a deeper understanding of the linguistic data. More positively, 
a formalized theory may automatically provide solutions for many 
problems other than those for which it was explicitly designed. 
Obscure and intuition-bound notions can neither lead to absurd 
conclusions nor provide new and correct ones, and hence they fail 
to be useful in two important respects. I think that some of those 
linguists who have questioned the value of precise and technical 
development of linguistic theory may have failed to recognize the 
productive potential in the method of rigorously stating a proposed 
theory and applying it strictly to linguistic material with no attempt 
to avoid unacceptable conclusions by ad hoc adjustments or loose 
formulation. The results reported below were obtained by a 
conscious attempt to follow this course systematically. Since this 
fact may be obscured by the informality of the presentation, it is 
important to emphasize it here. 






Specifically, we shall investigate three models for linguistic 
structure and seek to determine their limitations. We shall find that 
a certain very simple communication theoretic model of language 
and a more powerful model that incorporates a large part of what 
is now generally known as "immediate constituent analysis" cannot 
properly serve the purposes of grammatical description. The in- 
vestigation and application of these models brings to light certain 
facts about linguistic structure and exposes several gaps in linguistic 
theory; in particular, a failure to account for such relations between 
sentences as the active-passive relation. We develop a third, 
transformational model for linguistic structure which is more power- 
ful than the immediate constituent model in certain important 
respects and which does account for such relations in a natural way. 
When we formulate the theory of transformations carefully and 
apply it freely to English, we find that it provides a good deal of 
insight into a wide range of phenomena beyond those for which it 
was specifically designed. In short, we find that formalization can, 
in fact, perform both the negative and the positive service comment- 
ed on above. 

During the entire period of this research I have had the benefit of 
very frequent and lengthy discussions with Zellig S. Harris. So 
many of his ideas and suggestions are incorporated in the text 
below and in the research on which it is based that I will make no 
attempt to indicate them by special reference. Harris' work on 
transformational structure, which proceeds from a somewhat 
different point of view from that taken below, is developed in 
items 15, 16, and 19 of the bibliography (p. 115). In less obvious 
ways, perhaps, the course of this research has been influenced 
strongly by the work of Nelson Goodman and W. V. Quine. I have 
discussed most of this material at length with Morris Halle, and 
have benefited very greatly from his comments and suggestions. 
Eric Lenneberg, Israel Scheffler, and Yehoshua Bar-Hillel have read 
earlier versions of this manuscript and have made many valuable 
criticisms and suggestions on presentation and content. 

The work on the theory of transformations and the transforma- 
tional structure of English which, though only briefly sketched 
below, serves as the basis for much of the discussion, was largely 
carried out in 1951-55 while I was a Junior Fellow of the Society of 
Fellows, Harvard University. I would like to express my gratitude 
to the Society of Fellows for having provided me with the freedom 
to carry on this research. 

This work was supported in part by the U.S. Army (Signal 
Corps), the Air Force (Office of Scientific Research, Air Research 
and Development Command), and the Navy (Office of Naval 
Research); and in part by the National Science Foundation and 
the Eastman Kodak Corporation. 

Massachusetts Institute of Technology, Noam Chomsky 

Department of Modern Languages and 
Research Laboratory of Electronics, 
Cambridge, Mass. 



August 1, 1956. 



TABLE OF CONTENTS 



Introduction to Second Edition by David W. Lightfoot ... v 

Preface ... 5 

1. Introduction ... 11 

2. The Independence of Grammar ... 13 

3. An Elementary Linguistic Theory ... 18 

4. Phrase Structure ... 26 

5. Limitations of Phrase Structure Description ... 34 

6. On the Goals of Linguistic Theory ... 49 

7. Some Transformations in English ... 61 

8. The Explanatory Power of Linguistic Theory ... 85 

9. Syntax and Semantics ... 92 

10. Summary ... 106 

11. Appendix I: Notations and Terminology ... 109 

12. Appendix II: Examples of English Phrase Structure and 
Transformational Rules ... 111 

Bibliography ... 115 



1 



INTRODUCTION 



Syntax is the study of the principles and processes by which senten- 
ces are constructed in particular languages. Syntactic investigation 
of a given language has as its goal the construction of a grammar 
that can be viewed as a device of some sort for producing the 
sentences of the language under analysis. More generally, linguists 
must be concerned with the problem of determining the funda- 
mental underlying properties of successful grammars. The ultimate 
outcome of these investigations should be a theory of linguistic 
structure in which the descriptive devices utilized in particular 
grammars are presented and studied abstractly, with no specific 
reference to particular languages. One function of this theory is to 
provide a general method for selecting a grammar for each language, 
given a corpus of sentences of this language. 

The central notion in linguistic theory is that of "linguistic level." 
A linguistic level, such as phonemics, morphology, phrase structure, 
is essentially a set of descriptive devices that are made available for 
the construction of grammars; it constitutes a certain method for 
representing utterances. We can determine the adequacy of a 
linguistic theory by developing rigorously and precisely the form of 
grammar corresponding to the set of levels contained within this 
theory, and then investigating the possibility of constructing simple 
and revealing grammars of this form for natural languages. We 
shall study several different conceptions of linguistic structure in 
this manner, considering a succession of linguistic levels of increas- 
ing complexity which correspond to more and more powerful 
modes of grammatical description; and we shall attempt to show 
that linguistic theory must contain at least these levels if it is to 



provide, in particular, a satisfactory grammar of English. Finally, 
we shall suggest that this purely formal investigation of the structure 
of language has certain interesting implications for semantic 
studies. 1 



1 The motivation for the particular orientation of the research reported here 
is discussed below in § 6. 



2 



THE INDEPENDENCE OF GRAMMAR 



2.1 From now on I will consider a language to be a set (finite or 
infinite) of sentences, each finite in length and constructed out of a 
finite set of elements. All natural languages in their spoken or written 
form are languages in this sense, since each natural language has a 
finite number of phonemes (or letters in its alphabet) and each 
sentence is representable as a finite sequence of these phonemes (or 
letters), though there are infinitely many sentences. Similarly, the 
set of 'sentences' of some formalized system of mathematics can be 
considered a language. The fundamental aim in the linguistic 
analysis of a language L is to separate the grammatical sequences 
which are the sentences of L from the ungrammatical sequences 
which are not sentences of L and to study the structure of the 
grammatical sequences. The grammar of L will thus be a device 
that generates all of the grammatical sequences of L and none of the 
ungrammatical ones. One way to test the adequacy of a grammar 
proposed for L is to determine whether or not the sequences that it 
generates are actually grammatical, i.e., acceptable to a native 
speaker, etc. We can take certain steps towards providing a behav- 
ioral criterion for grammaticalness so that this test of adequacy can 
be carried out. For the purposes of this discussion, however, 
suppose that we assume intuitive knowledge of the grammatical 
sentences of English and ask what sort of grammar will be able to 
do the job of producing these in some effective and illuminating 
way. We thus face a familiar task of explication of some intuitive 
concept - in this case, the concept "grammatical in English," and 
more generally, the concept "grammatical." 

Notice that in order to set the aims of grammar significantly it is 
sufficient to assume a partial knowledge of sentences and non- 
sentences. That is, we may assume for this discussion that certain 
sequences of phonemes are definitely sentences, and that certain 
other sequences are definitely non-sentences. In many intermediate 
cases we shall be prepared to let the grammar itself decide, when the 
grammar is set up in the simplest way so that it includes the clear 
sentences and excludes the clear non-sentences. This is a familiar 
feature of explication. 1 A certain number of clear cases, then, will 
provide us with a criterion of adequacy for any particular grammar. 
For a single language, taken in isolation, this provides only a weak 
test of adequacy, since many different grammars may handle the 
clear cases properly. This can be generalized to a very strong con- 
dition, however, if we insist that the clear cases be handled properly 
for each language by grammars all of which are constructed by the 
same method. That is, each grammar is related to the corpus of 
sentences in the language it describes in a way fixed in advance for 
all grammars by a given linguistic theory. We then have a very 
strong test of adequacy for a linguistic theory that attempts to give a 
general explanation for the notion "grammatical sentence" in terms 
of "observed sentence," and for the set of grammars constructed in 
accordance with such a theory. It is furthermore a reasonable 
requirement, since we are interested not only in particular languages, 
but also in the general nature of Language. There is a great deal 
more that can be said about this crucial topic, but this would take 
us too far afield. Cf. § 6. 

2.2 On what basis do we actually go about separating grammatical 
sequences from ungrammatical sequences? I shall not attempt to 

1 Cf., for example, N. Goodman, The structure of appearance (Cambridge, 
1951), pp. 5-6. Notice that to meet the aims of grammar, given a linguistic 
theory, it is sufficient to have a partial knowledge of the sentences (i.e., a 
corpus) of the language, since a linguistic theory will state the relation 
between the set of observed sentences and the set of grammatical sentences; 
i.e., it will define "grammatical sentence" in terms of "observed sentence," 
certain properties of the observed sentences, and certain properties of grammars. 
To use Quine's formulation, a linguistic theory will give a general explanation 
for what 'could' be in language on the basis of "what is plus simplicity of the 
laws whereby we describe and extrapolate what is". (W. V. Quine, From a 
logical point of view [Cambridge, 1953], p. 54). Cf. § 6.1. 






give a complete answer to this question here (cf. §§ 6.7), but I would 
like to point out that several answers that immediately suggest 
themselves could not be correct. First, it is obvious that the set of 
grammatical sentences cannot be identified with any particular 
corpus of utterances obtained by the linguist in his field work. Any 
grammar of a language will project the finite and somewhat acci- 
dental corpus of observed utterances to a set (presumably infinite) 
of grammatical utterances. In this respect, a grammar mirrors the 
behavior of the speaker who, on the basis of a finite and accidental 
experience with language, can produce or understand an indefinite 
number of new sentences. Indeed, any explication of the notion 
"grammatical in L" (i.e., any characterization of "grammatical in 
L" in terms of "observed utterance of L") can be thought of as offer- 
ing an explanation for this fundamental aspect of linguistic behavior. 

2.3 Second, the notion "grammatical" cannot be identified with 
"meaningful" or "significant" in any semantic sense. Sentences (1) 
and (2) are equally nonsensical, but any speaker of English will 
recognize that only the former is grammatical. 

(1) Colorless green ideas sleep furiously. 

(2) Furiously sleep ideas green colorless. 

Similarly, there is no semantic reason to prefer (3) to (5) or (4) 
to (6), but only (3) and (4) are grammatical sentences of English. 

(3) have you a book on modern music? 

(4) the book seems interesting. 

(5) read you a book on modern music? 

(6) the child seems sleeping. 

Such examples suggest that any search for a semantically based 
definition of "grammaticalness" will be futile. We shall see, in fact, 
in § 7, that there are deep structural reasons for distinguishing (3) 
and (4) from (5) and (6), but before we are able to find an explana- 
tion for such facts as these we shall have to carry the theory of 
syntactic structure a good deal beyond its familiar limits. 

2.4 Third, the notion "grammatical in English" cannot be identi- 
fied in any way with the notion "high order of statistical approxi- 
mation to English." It is fair to assume that neither sentence (1) nor 
(2) (nor indeed any part of these sentences) has ever occurred in an 
English discourse. Hence, in any statistical model for grammatical- 
ness, these sentences will be ruled out on identical grounds as 
equally 'remote' from English. Yet (1), though nonsensical, is 
grammatical, while (2) is not. Presented with these sentences, a 
speaker of English will read (1) with a normal sentence intonation, 
but he will read (2) with a falling intonation on each word; in fact, 
with just the intonation pattern given to any sequence of unrelated 
words. He treats each word in (2) as a separate phrase. Similarly, 
he will be able to recall (1) much more easily than (2), to learn it 
much more quickly, etc. Yet he may never have heard or seen any 
pair of words from these sentences joined in actual discourse. To 
choose another example, in the context "I saw a fragile -," the 
words "whale" and "of" may have equal (i.e., zero) frequency in the 
past linguistic experience of a speaker who will immediately recog- 
nize that one of these substitutions, but not the other, gives a gram- 
matical sentence. We cannot, of course, appeal to the fact that sen- 
tences such as (1) 'might' be uttered in some sufficiently far-fetched 
context, while (2) would never be, since the basis for this differentiation 
between (1) and (2) is precisely what we are interested in determining. 
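The point can be made concrete with a toy model (my illustration, not the text's): a bigram score computed from relative frequencies in a finite corpus that contains no word pair from (1) or (2) assigns both sentences exactly zero, and so cannot separate the grammatical sequence from the ungrammatical one.

from collections import Counter

# Hypothetical training corpus; no word pair of (1) or (2) occurs in it.
corpus = ("the man read a book on modern music . "
          "the book seems interesting .").split()
bigrams = Counter(zip(corpus, corpus[1:]))
total = sum(bigrams.values())

def score(sentence):
    """Product of relative bigram frequencies; zero if any pair is unseen."""
    words = sentence.split()
    p = 1.0
    for pair in zip(words, words[1:]):
        p *= bigrams[pair] / total
    return p

print(score("colorless green ideas sleep furiously"))  # 0.0, yet grammatical
print(score("furiously sleep ideas green colorless"))  # 0.0, and ungrammatical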

Evidently, one's ability to produce and recognize grammatical 
utterances is not based on notions of statistical approximation and 
the like. The custom of calling grammatical sentences those that 
"can occur", or those that are "possible", has been responsible for 
some confusion here. It is natural to understand "possible" as 
meaning "highly probable" and to assume that the linguist's sharp 
distinction between grammatical and ungrammatical 2 is motivated 
by a feeling that since the 'reality' of language is too complex to be 
described completely, he must content himself with a schematized 

2 Below we shall suggest that this sharp distinction may be modified in favor 
of a notion of levels of grammaticalness. But this has no bearing on the point 
at issue here. Thus (1) and (2) will be at different levels of grammaticalness even 
if (1) is assigned a lower degree of grammaticalness than, say, (3) and (4); but 
they will be at the same level of statistical remoteness from English. The same is 
true of an indefinite number of similar pairs. 






version, replacing "zero probability, and all extremely low probabi- 
lities, by impossible, and all higher probabilities, by possible." 3 We 
see, however, that this idea is quite incorrect, and that a structural 
analysis cannot be understood as a schematic summary developed 
by sharpening the blurred edges in the full statistical picture. If we 
rank the sequences of a given length in order of statistical approxi- 
mation to English, we will find both grammatical and ungrammatic- 
al sequences scattered throughout the list; there appears to be no 
particular relation between order of approximation and grammati- 
calness. Despite the undeniable interest and importance of semantic 
and statistical studies of language, they appear to have no direct 
relevance to the problem of determining or characterizing the set of 
grammatical utterances. I think that we are forced to conclude that 
grammar is autonomous and independent of meaning, and that 
probabilistic models give no particular insight into some of the 
basic problems of syntactic structure. 4 

3 C. F. Hockett, A manual of phonology (Baltimore, 1955), p. 10. 

4 We return to the question of the relation between semantics and syntax in 
§§ 8, 9, where we argue that this relation can only be studied after the syntactic 
structure has been determined on independent grounds. I think that much the 
same thing is true of the relation between syntactic and statistical studies of 
language. Given the grammar of a language, one can study the use of the 
language statistically in various ways; and the development of probabilistic 
models for the use of language (as distinct from the syntactic structure of 
language) can be quite rewarding. Cf. B. Mandelbrot, "Structure formelle des 
textes et communication: deux études," Word 10.1-27 (1954); H. A. Simon, 
"On a class of skew distribution functions," Biometrika 42.425-40 (1955). 

One might seek to develop a more elaborate relation between statistical and 
syntactic structure than the simple order of approximation model we have 
rejected. I would certainly not care to argue that any such relation is unthink- 
able, but I know of no suggestion to this effect that does not have obvious flaws. 
Notice, in particular, that for any n, we can find a string whose first n words may 
occur as the beginning of a grammatical sentence S1 and whose last n words may 
occur as the ending of some grammatical sentence S2, but where S1 must be 
distinct from S2. For example, consider the sequences of the form "the man 
who ... are here," where ... may be a verb phrase of arbitrary length. Notice 
also that we can have new but perfectly grammatical sequences of word classes, 
e.g., a sequence of adjectives longer than any ever before produced in the 
context "I saw a - house." Various attempts to explain the grammatical- 
ungrammatical distinction, as in the case of (1), (2), on the basis of frequency of 
sentence type, order of approximation of word class sequences, etc., will run 
afoul of numerous facts like these. 



3 



AN ELEMENTARY LINGUISTIC THEORY 



3.1 Assuming the set of grammatical sentences of English to be 
given, we now ask what sort of device can produce this set (equi- 
valently, what sort of theory gives an adequate account of the 
structure of this set of utterances). We can think of each sentence 
of this set as a sequence of phonemes of finite length. A language is 
an enormously involved system, and it is quite obvious that any 
attempt to present directly the set of grammatical phoneme sequen- 
ces would lead to a grammar so complex that it would be practically 
useless. For this reason (among others), linguistic description 
proceeds in terms of a system of "levels of representation".
Instead of stating the phonemic structure of sentences directly, the 
linguist sets up such 'higher level' elements as morphemes, and 
states separately the morphemic structure of sentences and the 
phonemic structure of morphemes. It can easily be seen that the 
joint description of these two levels will be much simpler than a 
direct description of the phonemic structure of sentences. 

Let us now consider various ways of describing the morphemic 
structure of sentences. We ask what sort of grammar is necessary to 
generate all the sequences of morphemes (or words) that constitute 
grammatical English sentences, and only these. 

One requirement that a grammar must certainly meet is that it be 
finite. Hence the grammar cannot simply be a list of all morpheme 
(or word) sequences, since there are infinitely many of these. A 
familiar communication theoretic model for language suggests a 
way out of this difficulty. Suppose that we have a machine that can 
be in any one of a finite number of different internal states, and 
suppose that this machine switches from one state to another by 






producing a certain symbol (let us say, an English word). One of 
these states is an initial state; another is a final state. Suppose that
the machine begins in the initial state, runs through a sequence of 
states (producing a word with each transition), and ends in the final 
state. Then we call the sequence of words that has been produced a 
"sentence". Each such machine thus defines a certain language; 
namely, the set of sentences that can be produced in this way. Any 
language that can be produced by a machine of this sort we call a 
finite state language; and we can call the machine itself a finite state 
grammar. A finite state grammar can be represented graphically in 
the form of a "state diagram" 1 For example, the grammar that 
produces just the two sentences "the man comes" and "the men 
come" can be represented by the following state diagram: 

(7) [state diagram: from the initial point, "the" leads to a node from
which "man" followed by "comes", or "men" followed by "come",
leads to the final point]
We can extend this grammar to produce an infinite number of sen- 
tences by adding closed loops. Thus the finite grammar of the 
subpart of English containing the above sentences in addition to 
"the old man comes", "the old old man comes", "the old men 
come", "the old old men come", can be represented by the 
following state diagram:

(8) [state diagram: like (7), but with a closed loop on "old" after
"the", so that "old" may be repeated any number of times]




1 C. E. Shannon and W. Weaver, The mathematical theory of communication 
(Urbana, 1949), pp. 15f. 






Given a state diagram, we produce a sentence by tracing a path from 
the initial point on the left to the final point on the right, always 
proceeding in the direction of the arrows. Having reached a certain
point in the diagram, we can proceed along any path leading from 
this point, whether or not this path has been traversed before in 
constructing the sentence in question. Each node in such a diagram 
thus corresponds to a state of the machine. We can allow transition 
from one state to another in several ways, and we can have any 
number of closed loops of any length. The machines that produce 
languages in this manner are known mathematically as "finite state 
Markov processes." To complete this elementary communication 
theoretic model for language, we assign a probability to each 
transition from state to state. We can then calculate the "uncer- 
tainty" associated with each state and we can define the "information 
content" of the language as the average uncertainty, weighted by 
the probability of being in the associated states. Since we are 
studying grammatical, not statistical structure of language here, this 
generalization does not concern us. 
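[Editor's note: the following sketch is not part of the original text. It renders a state diagram such as (8) as a transition table in Python, with invented state numbers, to make concrete the procedure of producing a sentence by running from the initial state to the final state; the probabilistic refinement mentioned above is omitted.]

    import random

    # Transition table for the grammar of diagram (8): each state maps to
    # the (word, next-state) choices available there. The loop on "old"
    # at state 2 is the closed loop that makes the language infinite.
    TRANSITIONS = {
        1: [("the", 2)],
        2: [("old", 2), ("man", 3), ("men", 4)],
        3: [("comes", 5)],
        4: [("come", 5)],
    }
    FINAL = 5

    def produce_sentence():
        """Trace a path from the initial state to the final state,
        producing one word with each transition."""
        state, words = 1, []
        while state != FINAL:
            word, state = random.choice(TRANSITIONS[state])
            words.append(word)
        return " ".join(words)

    print(produce_sentence())   # e.g. "the old old men come"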

This conception of language is an extremely powerful and general 
one. If we can adopt it, we can view the speaker as being essentially 
a machine of the type considered. In producing a sentence, the 
speaker begins in the initial state, produces the first word of the 
sentence, thereby switching into a second state which limits the 
choice of the second word, etc. Each state through which he passes 
represents the grammatical restrictions that limit the choice of the 
next word at this point in the utterance. 2 

In view of the generality of this conception of language, and its 
utility in such related disciplines as communication theory, it is 
important to inquire into the consequences of adopting this point of 
view in the syntactic study of some language such as English or a 
formalized system of mathematics. Any attempt to construct a 
finite state grammar for English runs into serious difficulties and 
complications at the very outset, as the reader can easily convince 
himself. However, it is unnecessary to attempt to show this by 

2 This is essentially the model of language that Hockett develops in A manual 
of phonology (Baltimore, 1955), 0.2.






example, in view of the following more general remark about English:
(9) English is not a finite state language. 

That is, it is impossible, not just difficult, to construct a device of the 
type described above (a diagram such as (7) or (8)) which will 
produce all and only the grammatical sentences of English. To 
demonstrate (9) it is necessary to define the syntactic properties of 
English more precisely. We shall proceed to describe certain 
syntactic properties of English which indicate that, under any 
reasonable delimitation of the set of sentences of the language,
(9) can be regarded as a theorem concerning English. To go back to
the question asked in the second paragraph of § 3, (9) asserts that it
is not possible to state the morphemic structure of sentences 
directly by means of some such device as a state diagram, and that 
the Markov process conception of language outlined above cannot 
be accepted, at least for the purposes of grammar. 

3.2 A language is defined by giving its 'alphabet' (i.e., the finite set
of symbols out of which its sentences are constructed) and its
grammatical sentences. Before investigating English directly, let us
consider several languages whose alphabets contain just the letters
a, b, and whose sentences are as defined in (10i-iii):

(10) (i) ab, aabb, aaabbb, ..., and in general, all sentences con-
sisting of n occurrences of a followed by n occurrences of
b and only these;
(ii) aa, bb, abba, baab, aaaa, bbbb, aabbaa, abbbba, ..., and
in general, all sentences consisting of a string X followed
by the 'mirror image' of X (i.e., X in reverse), and only
these;
(iii) aa, bb, abab, baba, aaaa, bbbb, aabaab, abbabb, ..., and in
general, all sentences consisting of a string X of a's and b's
followed by the identical string X, and only these.

We can easily show that each of these three languages is not a finite 
state language. Similarly, languages such as (10) where the a's and
b's in question are not consecutive, but are embedded in other






strings, will fail to be finite state languages under quite general 
conditions. 3 
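[Editor's note: for concreteness, the three languages of (10) can be written down as string-building procedures. The sketch below is an editorial illustration in Python; it produces sentences of each language, though of course no such procedure by itself shows that the languages are not finite state.]

    def lang_i(n):
        """(10i): n occurrences of a followed by n occurrences of b."""
        return "a" * n + "b" * n

    def lang_ii(x):
        """(10ii): a string X followed by its mirror image."""
        return x + x[::-1]

    def lang_iii(x):
        """(10iii): a string X followed by the identical string X."""
        return x + x

    print(lang_i(3))         # aaabbb
    print(lang_ii("aab"))    # aabbaa
    print(lang_iii("aab"))   # aabaab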

But it is clear that there are subparts of English with the basic
form of (10i) and (10ii). Let S_1, S_2, S_3, ... be declarative sentences
in English. Then we can have such English sentences as:

(11) (i) If S_1, then S_2.
(ii) Either S_3, or S_4.
(iii) The man who said that S_5, is arriving today.

In (Hi), we cannot have "or" in place of "then"; in (llii), we 
cannot have "then" in place of "or", in (lliii), we cannot have 
"are" instead of '"is". In each of these cases there is a dependency 
between words on opposite sides of the comma (i.e., "if" "then", 
"either" -"or", "man"-"is"). But between the interdependent 
words, in each case, we can insert a declarative sentence S ly S 3 , S 5 , 
and this declarative sentence may in fact be one of (1 1 i-iii). Thus if 
in (Hi) we take S, as (lln) and S 3 as (llni), we will ha\e the 
sentence: 

(12) if, either (llni), or 6' 4 , then S 2 , 

and S_5 in (11iii) may again be one of the sentences of (11). It is clear,
then, that in English we can find a sequence a + S_1 + b, where there
is a dependency between a and b, and we can select as S_1 another
sequence containing c + S_2 + d, where there is a dependency between
c and d, then select as S_2 another sequence of this form, etc. A set
of sentences that is constructed in this way (and we see from (11)
that there are several possibilities available for such construction;
(11) comes nowhere near exhausting these possibilities) will have all
of the mirror image properties of (10ii) which exclude (10ii) from the
set of finite state languages. Thus we can find various kinds of non-

3 Sec my "Three models for the description of language," LR.E. Transactions 
on Information Theory % \.>1. IT-2, Proceedings of the symposium on information 
theory, Sept., 1956, for a statement of such conditions and a proof of f9i 
Not:ce in particular that the set of wcl'.-formed formulas of any formalized 
system of mathematics or logi^ wii! fail to constitute a tinue state language- 
because of paired parentheses or equivalent restrictions. 






finite state models within English. This is a rough indication of the
lines along which a rigorous proof of (9) can be given, on the 
assumption that such sentences as (11) and (12) belong to English,
while sentences that contradict the cited dependencies of (11) (e.g.,
"either S_1, then S_2," etc.) do not belong to English. Note that many
of the sentences of the form (12), etc., will be quite strange and
unusual (they can often be made less strange by replacing "if" by
"whenever", "on the assumption that", "if it is the case that", etc.,
without changing the substance of our remarks). But they are all 
grammatical sentences, formed by processes of sentence construc- 
tion so simple and elementary that even the most rudimentary 
English grammar would contain them. They can be understood, 
and we can even state quite simply the conditions under which they 
can be true It is difficult to conceive of any possible motivation for 
excluding them from the set of grammatical English sentences. 
Hence it seems quite clear that no theory of linguistic structure based 
exclusively on Markov process models and the like, will be able to 
explain or account for the ability of a speaker of English to produce 
and understand new utterances, while he rejects other new sequences 
as not belonging to the language. 
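[Editor's note: the recursive construction just described can be mimicked mechanically. This editorial sketch nests the frames of (11i-ii) inside one another; the filler sentence is invented. Each opening word ("if", "either") determines the word required after the embedded sentence, so the output exhibits mirror-image dependencies of the kind in (10ii).]

    import random

    FRAMES = [("if", "then"), ("either", "or")]   # paired words of (11i-ii)

    def sentence(depth):
        """Embed sentences inside the frames of (11) to the given depth."""
        if depth == 0:
            return "it rains"                     # placeholder declarative
        a, b = random.choice(FRAMES)
        return "%s %s, %s %s" % (a, sentence(depth - 1), b, sentence(depth - 1))

    print(sentence(2))
    # e.g.: if either it rains, or it rains, then either it rains, or it rains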

3.3 We might arbitrarily decree that such processes of sentence
formation in English as those we are discussing cannot be carried
out more than n times, for some fixed n. This would of course make
English a finite state language, as, for example, would a limitation
of English sentences to length of less than a million words. Such
arbitrary limitations serve no useful purpose, however. The point
is that there are processes of sentence formation that finite state
grammars are intrinsically not equipped to handle. If these pro-
cesses have no finite limit, we can prove the literal inapplicability of
this elementary theory. If the processes have a limit, then the
construction of a finite state grammar will not be literally out of the
question, since it will be possible to list the sentences, and a list is
essentially a trivial finite state grammar. But this grammar will be
so complex that it will be of little use or interest. In general, the
assumption that languages are infinite is made in order to simplify 






the description of these languages. If a grammar does not have
recursive devices (closed loops, as in (8), in the finite state grammar)
it will be prohibitively complex. If it does have recursive devices of
some sort, it will produce infinitely many sentences.

In short, the approach to the analysis of grammaticalness suggest-
ed here in terms of a finite state Markov process that produces
sentences from left to right, appears to lead to a dead end just as
surely as the proposals rejected in § 2. If a grammar of this type
produces all English sentences, it will produce many non-sentences 
as well. If it produces only English sentences, we can be sure that 
there will be an infinite number of true sentences, false sentences, 
reasonable questions, etc., which it simply will not produce. 

The conception of grammar which has just been rejected repre- 
sents in a way the minimal linguistic theory that merits serious 
consideration. A finite state grammar is the simplest type of 
grammar which, with a finite amount of apparatus, can generate an 
infinite number of sentences. We have seen that such a limited
linguistic theory is not adequate; we are forced to search for some
more powerful type of grammar and some more 'abstract' form of
linguistic theory. The notion of "linguistic level of representation"
put forth at the outset of this section must be modified and elaborat-
ed. At least one linguistic level cannot have this simple structure.
That is, on some level, it will not be the case that each sentence is
represented simply as a finite sequence of elements of some sort,
generated from left to right by some simple device. Alternatively,
we must give up the hope of finding a finite set of levels, ordered
from high to low, so constructed that we can generate all utterances 
by stating the permitted sequences of highest level elements, the 
constituency of each highest level element in terms of elements of 
the second level, etc., finally stating the phonemic constituency of 
elements of the next-to-lowest level. 4 At the outset of § 3, we 

4 A third alternative would be to retain the notion of a linguistic level as a
simple linear method of representation, but to generate at least one such level
from left to right by a device with more capacity than a finite state Markov
process. There are so many difficulties with the notion of linguistic level based
on left to right generation, both in terms of complexity of description and lack 






proposed that levels be established in this way in order to simplify 
the description of the set of grammatical phoneme sequences. If a 
language can be described in an elementary, left-to-right manner in 
terms of a single level (i.e., if it is a finite state language) then this 
description may indeed be simplified by construction of such higher 
levels; but to generate non-finite state languages such as English we
need fundamentally different methods, and a more general concept 
of "linguistic level". 



of explanatory power (cf. § 8), that it seems pointless to pursue this approach
any further. The grammars that we discuss below that do not generate from
left to right also correspond to processes less elementary than finite state Markov
processes. But they are perhaps less powerful than the kind of device that
would be required for direct left-to-right generation of English. Cf. my "Three
models for the description of language" for some further discussion.



4 



PHRASE STRUCTURE 



4.1 Customarily, linguistic description on the syntactic level is 
formulated in terms of constituent analysis (parsing). We now ask 
what form of grammar is presupposed by description of this sort. 
We find that the new form of grammar is essentially more powerful
than the finite state model rejected above, and that the associated 
concept of "linguistic level" is different in fundamental respects. 

As a simple example of the new form for grammars associated 
with constituent analysis, consider the following: 

(13) (i) Sentence → NP + VP
(ii) NP → T + N
(iii) VP → Verb + NP
(iv) T → the
(v) N → man, ball, etc.
(vi) Verb → hit, took, etc.

Suppose that we interpret each rule X → Y of (13) as the instruction
"rewrite X as Y". We shall call (14) a derivation of the sentence
"the man hit the ball," where the numbers at the right of each line
of the derivation refer to the rule of the "grammar" (13) used in
constructing that line from the preceding line. 1

1 The numbered rules of English grammar to which reference will constantly
be made in the following pages are collected and properly ordered in § 12,
Appendix II. The notational conventions that we shall use throughout the
discussion of English structure are stated in § 11, Appendix I.

In his "Axiomatic syntax: the construction and evaluation of a syntactic
calculus," Language 31.409-14 (1955), Harwood describes a system of word
class analysis similar in form to the system developed below for phrase structure.
The system he describes would be concerned only with the relation between
T + N + Verb + T + N and the + man + hit + the + ball in the example discussed






(14) Sentence
     NP + VP                          (i)
     T + N + VP                       (ii)
     T + N + Verb + NP                (iii)
     the + N + Verb + NP              (iv)
     the + man + Verb + NP            (v)
     the + man + hit + NP             (vi)
     the + man + hit + T + N          (ii)
     the + man + hit + the + N        (iv)
     the + man + hit + the + ball     (v)


Thus the second line of (14) is formed from the first line by rewriting
Sentence as NP + VP in accordance with rule (i) of (13); the third
line is formed from the second by rewriting NP as T + N in accord-
ance with rule (ii) of (13); etc. We can represent the derivation (14)
in an obvious way by means of the following diagram:

(15)             Sentence
                /        \
              NP          VP
             /  \        /   \
            T    N     Verb    NP
            |    |      |     /  \
           the  man    hit   T    N
                             |    |
                            the  ball

The diagram (15) conveys less information than the derivation (14), 
since it does not tell us in what order the rules were applied in (14).

in (13)-(15), i.e., the grammar would contain the "initial string" T + N + Verb +
T + N and such rules as (13iv-vi). It would thus be a weaker system than the
elementary theory discussed in § 3, since it could not generate an infinite lan-
guage with a finite grammar. While Harwood's formal account (pp. 409-11)
deals only with word class analysis, the linguistic application (p. 412) is a case of
immediate constituent analysis, with the classes C_i,m presumably taken to be
classes of word sequences. This extended application is not quite compatible
with the formal account, however. For example, none of the proposed measures
of goodness of fit can stand without revision under this reinterpretation of the
formalism.









Given (14), we can construct (15) uniquely, but not vice versa, since 
it is possible to construct a derivation that reduces to (15) with a 
different order of application of the rules. The diagram (15) retains 
just what is essential in (14) for the determination of the phrase 
structure (constituent analysis) of the derived sentence "the man 
hit the ball." A sequence of words of this sentence is a constituent 
of type Z if we can trace this sequence back to a single point of 
origin in (15), and this point of origin is labelled Z. Thus "hit the 
ball" can be traced back to VP in (15); hence "hit the ball" is a VP 
in the derived sentence. But "man hit" cannot be traced back to
any single point of origin in (15); hence "man hit" is not a con-
stituent at all. 
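[Editor's note: the following sketch is not part of the original text. It runs grammar (13) mechanically in Python, rewriting the leftmost rewritable symbol at each step. The derivation printed is equivalent to (14) in the sense defined immediately below (it reduces to the same diagram (15)), though the order of rule application differs.]

    RULES = {
        "Sentence": [["NP", "VP"]],
        "NP":       [["T", "N"]],
        "VP":       [["Verb", "NP"]],
        "T":        [["the"]],
        "N":        [["man"], ["ball"]],
        "Verb":     [["hit"]],
    }

    def derive(string, choose):
        """Print each line of a derivation, rewriting the leftmost
        rewritable symbol until a terminal string remains."""
        while True:
            print(" + ".join(string))
            for i, sym in enumerate(string):
                if sym in RULES:
                    string = string[:i] + choose(sym) + string[i + 1:]
                    break
            else:
                return string            # terminated derivation

    nouns = iter([["man"], ["ball"]])    # rewrite N as man first, then ball
    choose = lambda s: next(nouns) if s == "N" else RULES[s][0]
    derive(["Sentence"], choose)         # last line: the + man + hit + the + ball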

We say that two derivations are equivalent if they reduce to the 
same diagram of the form (15). Occasionally, a grammar may 
permit us to construct nonequivalent derivations for a given sen- 
tence. Under these circumstances, we say that we have a case of 
"constructional homonymity", 2 and if our grammar is correct, this 
sentence of the language should be ambiguous. We return to the 
important notion of constructional homonymity below. 

One generalization of (13) is clearly necessary. We must be able
to limit application of a rule to a certain context. Thus T can be
rewritten a if the following noun is singular, but not if it is plural;
similarly, Verb can be rewritten "hits" if the preceding noun is man,
but not if it is men. In general, if we wish to limit the rewriting of
X as Y to the context Z — W, we can state in the grammar the rule

(16) Z + X + W → Z + Y + W.

For example, in the case of singular and plural verbs, instead of
having Verb → hits as an additional rule of (13), we should have

(17) NP_sing + Verb → NP_sing + hits

indicating that Verb is rewritten hits only in the context NP_sing —.

2 See § 8.1 for some examples of constructional homonymity. See my The
logical structure of linguistic theory (mimeographed); "Three models for the
description of language" (above, p. 22, fn. 3); C. F. Hockett, "Two models of
grammatical description," Linguistics Today, Word 10.210-33 (1954); R. S.
Wells, "Immediate constituents," Language 23.81-117 (1947) for more detailed
discussion.






Correspondingly, (13ii) will have to be restated to include NP_sing
and NP_pl. 3 This is a straightforward generalization of (13). One
feature of (13) must be preserved, however, as it is in (17): only a
single element can be rewritten in any single rule; i.e., in (16), X
must be a single symbol such as T, Verb, and not a sequence such
as T + N. If this condition is not met, we will not be able to recover
properly the phrase structure of derived sentences from the
associated diagrams of the form (15), as we did above.
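[Editor's note: the effect of a context-restricted rule like (16)-(17) can be shown with a small editorial sketch. Only the left-context case needed for (17) is implemented; the symbol spellings are invented.]

    def rewrite_in_context(string, x, y, z):
        """Rule of the form (16) with left context only: rewrite X as Y
        just where X is immediately preceded by Z."""
        out = list(string)
        for i in range(1, len(out)):
            if out[i] == x and out[i - 1] == z:
                out[i] = y
        return out

    # (17): Verb is rewritten hits only in the context NP_sing —
    print(rewrite_in_context(["NP_sing", "Verb"], "Verb", "hits", "NP_sing"))
    # ['NP_sing', 'hits']
    print(rewrite_in_context(["NP_pl", "Verb"], "Verb", "hits", "NP_sing"))
    # ['NP_pl', 'Verb']   (the rule does not apply)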

We can now describe more generally the form of grammar
associated with the theory of linguistic structure based upon
constituent analysis. Each such grammar is defined by a finite set Σ
of initial strings and a finite set F of 'instruction formulas' of the
form X → Y interpreted: "rewrite X as Y." Though X need not be
a single symbol, only a single symbol of X can be rewritten in
forming Y. In the grammar (13), the only member of the set Σ of
initial strings was the single symbol Sentence, and F consisted of the
rules (i)-(vi); but we might want to extend Σ to include, for
example, Declarative Sentence, Interrogative Sentence, as additional
symbols. Given the grammar [Σ, F], we define a derivation as a
finite sequence of strings, beginning with an initial string of Σ, and
with each string in the sequence being derived from the preceding
string by application of one of the instruction formulas of F. Thus
(14) is a derivation, and the five-termed sequence of strings con-
sisting of the first five lines of (14) is also a derivation. Certain
derivations are terminated derivations, in the sense that their final
string cannot be rewritten any further by the rules F. Thus (14) is a
terminated derivation, but the sequence consisting of the first five

3 Thus in a more complete grammar, (13ii) might be replaced by a set of
rules that includes the following:

NP_sing → T + N + Ø (+ Prepositional Phrase)
NP_pl → T + N + S (+ Prepositional Phrase)

where S is the morpheme which is singular for verbs and plural for nouns
("comes," "boys"), and Ø is the morpheme which is singular for nouns and
plural for verbs ("boy," "come"). We shall omit all mention of first and second
person throughout this discussion. Identification of the nominal and verbal
number affix is actually of questionable validity.







lines of (14) is not. If a string is the last line of a terminated deriva-
tion, we say that it is a terminal string. Thus the + man + hit + the +
ball is a terminal string from the grammar (13). Some grammars of
the form [Σ, F] may have no terminal strings, but we are interested
only in grammars that do have terminal strings, i.e., that describe
some language. A set of strings is called a terminal language if it is
the set of terminal strings for some grammar [Σ, F]. Thus each such
grammar defines some terminal language (perhaps the 'empty'
language containing no sentences), and each terminal language is
produced by some grammar of the form [Σ, F]. Given a terminal
language and its grammar, we can reconstruct the phrase structure
of each sentence of the language (each terminal string of the
grammar) by considering the associated diagrams of the form (15),
as we saw above. We can also define the grammatical relations in
these languages in a formal way in terms of the associated diagrams.

4.2 In § 3 we considered languages, called "finite state languages",
which were generated by finite state Markov processes. Now we
are considering terminal languages that are generated by systems of
the form [Σ, F]. These two types of languages are related in the
following way:

Theorem: Every finite state language is a terminal language, but
there are terminal languages which are not finite state languages. 4

The import of this theorem is that description in terms of phrase
structure is essentially more powerful than description in terms of
the elementary theory presented above in § 3. As examples of
terminal languages that are not finite state languages we have the
languages (10i), (10ii) discussed in § 3. Thus the language (10i),
consisting of all and only the strings ab, aabb, aaabbb, ..., can be
produced by the [Σ, F] grammar (18).

(18) Σ: Z
     F: Z → ab
        Z → aZb

4 See my "Three models for the description of language" (above, p. 22, fn. 3)
for proofs of this and related theorems about relative power of grammars.






This grammar has the initial string Z (as (13) has the initial string
Sentence) and it has two rules. It can easily be seen that each
terminated derivation constructed from (18) ends in a string of the
language (10i), and that all such strings are produced in this way.
Similarly, languages of the form (10ii) can be produced by [Σ, F]
grammars. (10iii), however, cannot be produced by a grammar of
this type, unless the rules embody contextual restrictions. 5
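[Editor's note: grammar (18) can be run directly; the sketch below is an editorial illustration. Applying Z → aZb some number of times and then Z → ab yields exactly the strings of (10i).]

    def generate(n):
        """Run grammar (18) from the initial string Z: apply Z -> aZb
        n-1 times, then Z -> ab, producing n a's followed by n b's."""
        string = "Z"
        for _ in range(n - 1):
            string = string.replace("Z", "aZb")
        return string.replace("Z", "ab")

    print([generate(n) for n in (1, 2, 3)])   # ['ab', 'aabb', 'aaabbb']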

In § 3 we pointed out that the languages (10i) and (10ii) corre-
spond to subparts of English, and that therefore the finite state
Markov process model is not adequate for English. We now see
that the phrase structure model does not fail in such cases. We have
not proved the adequacy of the phrase structure model, but we have
shown that large parts of English which literally cannot be described
in terms of the finite state process model can be described in terms
of phrase structure.

Note that in the case of (18), we can say that in the string aaabbb
of (10i), for example, ab is a Z, aabb is a Z, and aaabbb itself is a Z. 6
Thus this particular string contains three 'phrases,' each of which
is a Z. This is, of course, a very trivial language. It is important to
observe that in describing this language we have introduced a
symbol Z which is not contained in the sentences of this language.
This is the essential fact about phrase structure which gives it its
'abstract' character.

Observe also that in the case of both (13) and (18) (as in every
system of phrase structure), each terminal string has many different
representations. For example, in the case of (13), the terminal
string "the man hit the ball" is represented by the strings Sentence,
NP + VP, T + N + VP, and all the other lines of (14), as well as by
such strings as NP + Verb + NP, T + N + hit + NP, which would
occur in other derivations equivalent to (14) in the sense there
defined. On the level of phrase structure, then, each sentence of the
language is represented by a set of strings, not by a single string as it

5 See my "On certain formal properties of grammars", Information and 
Control 2.133 -167 (1959). 

* Where "is a" is the relation defined in § 4 1 in terms of such diagrams as 
(15). 






is on the level of phonemes, morphemes, or words. Thus phrase
structure, taken as a linguistic level, has the fundamentally different
and nontrivial character which, as we saw in the last paragraph of
§ 3, is required for some linguistic level. We cannot set up a hier-
archy among the various representations of "the man hit the ball";
we cannot subdivide the system of phrase structure into a finite set
of levels, ordered from higher to lower, with one representation for
each sentence on each of these sublevels. For example, there is no
way of ordering the elements NP and VP relative to one another.
Noun phrases are contained within verb phrases, and verb phrases
within noun phrases, in English. Phrase structure must be con-
sidered as a single level, with a set of representations for each
sentence of the language. There is a one-one correspondence
between the properly chosen sets of representations, and diagrams
of the form (15).

4.3 Suppose that by a [Σ, F] grammar we can generate all of the
grammatical sequences of morphemes of a language. In order to
complete the grammar we must state the phonemic structure of
these morphemes, so that the grammar will produce the grammatical
phoneme sequences of the language. But this statement (which we
would call the morphophonemics of the language) can also be given
by a set of rules of the form "rewrite X as Y", e.g., for English,

(19) (i) walk → /wɔk/
(ii) take + past → /tuk/
(iii) hit + past → /hit/
(iv) /...D/ + past → /...D/ + /id/ (where D = /t/ or /d/)
(v) /...C_unv/ + past → /...C_unv/ + /t/ (where C_unv is an un-
voiced consonant)
(vi) past → /d/
(vii) take → /teyk/
etc.

or something similar. Note, incidentally, that order must be defined
among these rules; e.g., (ii) must precede (v) or (vii), or we will
derive such forms as /teykt/ for the past tense of take. In these






morphophonemic rules we need no longer require that only a single 
symbol be rewritten in each rule. 
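[Editor's note: the ordering requirement just noted can be exhibited mechanically. This editorial sketch applies a fragment of (19) as an ordered list of string rewrites; moving (19vii) ahead of (19ii) would leave take + past to surface wrongly as teykt.]

    # A fragment of (19), applied in order. (19ii) must come before
    # (19vii): otherwise take + past becomes teyk + past, to which a
    # rule like (19v) would add t, deriving the wrong form teykt.
    RULES = [
        ("take + past", "tuk"),    # (19ii)
        ("hit + past", "hit"),     # (19iii)
        ("take", "teyk"),          # (19vii)
    ]

    def morphophonemics(string):
        for pattern, replacement in RULES:
            string = string.replace(pattern, replacement)
        return string

    print(morphophonemics("take + past"))   # tuk
    print(morphophonemics("take"))          # teyk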

We can now extend the phrase structure derivations by applying
(19), so that we have a unified process for generating phoneme
sequences from the initial string Sentence. This makes it appear as
though the break between the higher level of phrase structure and 
the lower levels is arbitrary. Actually, the distinction is not arbi- 
trary. For one thing, as we have seen, the formal properties of the 
rules X-* Y corresponding to phrase structure are different from 
those of the morphophonemic rules, since in the case of the former 
we must require that only a single symbol be rewritten. Second, the 
elements that figure in the rules (19) can be classified into a finite set
of levels (e.g., phonemes and morphemes; or, perhaps, phonemes, 
morphophonemes, and morphemes) each of which is elementary in 
the sense that a single string of elements of this level is associated 
with each sentence as its representation on this level (except in 
cases of homonymity), and each such string represents a single 
sentence. But the elements that appear in the rules corresponding 
to phrase structure cannot be classified into higher and lower levels 
in this way. We shall see below that there is an even more funda- 
mental reason for marking this subdivison into the higher level 
rules of phrase structure and the lower level rules that convert 
strings of morphemes into strings of phonemes 

The formal properties of the system of phrase structure make an 
interesting study, and it is easy to show that further elaboration of 
the form of grammar is both necessary and possible. Thus it can 
easily be seen that it would be quite advantageous to order the rules 
of the set F so that certain of the rules can apply only after others 
have applied. For example, we should certainly want all rules of the 
form (17) to apply before any rule which enables us to rewrite NP 
as NP + Preposition + NP, or the like; otherwise the grammar will 
produce such nonsentences as "the men near the truck begins work 
at eight." But this elaboration leads to problems that would carry 
us beyond the scope of this study. 



5 






LIMITATIONS OF 
PHRASE STRUCTURE DESCRIPTION 



5.1 We have discussed two models for the structure of language, 
a communication theoretic model based on a conception of language 
as a Markov process and corresponding, in a sense, to the minimal 
linguistic theory, and a phrase structure model based on immediate 
constituent analysis. We have seen that the first is surely inadequate 
for the purposes of grammar, and that the second is more powerful 
than the first, and does not fail in the same way. Of course there 
are languages (in our general sense) that cannot be described in 
terms of phrase structure, but I do not know whether or not 
English is itself literally outside the range of such analysis. However, 
I think that there are other grounds for rejecting the theory of phrase 
structure as inadequate for the purpose of linguistic description. 

The strongest possible proof of the inadequacy of a linguistic 
theory is to show that it literally cannot apply to some natural 
language. A weaker, but perfectly sufficient demonstration of inade- 
quacy would be to show that the theory can apply only clumsily; 
that is, to show that any grammar that can be constructed in terms 
of this theory will be extremely complex, ad hoc, and 'unrevealing',
that certain very simple ways of describing grammatical sentences 
cannot be accommodated within the associated forms of grammar, 
and that certain fundamental formal properties of natural language 
cannot be utilized to simplify grammars. We can gather a good deal 
of evidence of this sort in favor of the thesis that the form of gram- 
mar described above, and the conception of linguistic theory that 
underlies it, are fundamentally inadequate. 

The only way to test the adequacy of our present apparatus is to 
attempt to apply it directly to the description of English sentences. 






As soon as we consider any sentences beyond the simplest type, and 
in particular, when we attempt to define some order among the 
rules that produce these sentences, we find that we run into numer- 
ous difficulties and complications. To give substance to this claim 
would require a large expenditure of effort and space, and I can 
only assert here that this can be shown fairly convincingly. 1 
Instead of undertaking this rather arduous and ambitious course 
here, I shall limit myself to sketching a few simple cases in which 
considerable improvement is possible over grammars of the form 
[I, F]. In § 8 I shall suggest an independent method of demon- 
strating the inadequacy of constituent analysis as a means of des- 
cribing English sentence structure. 

5.2 One of the most productive processes for forming new sen-
tences is the process of conjunction. If we have two sentences
Z + X + W and Z + Y + W, and if X and Y are actually constituents
of these sentences, we can generally form a new sentence Z + X +
and + Y + W. For example, from the sentences (20a-b) we can form
the new sentence (21).

(20) (a) the scene of the movie - was in Chicago 
(b) the scene - of the play - was in Chicago 

(21) the scene - of the movie and of the play - was in Chicago. 

If X and Y are, however, not constituents, we generally cannot do
this. 2 For example we cannot form (23) from (22a-b).

1 See my The logical structure of linguistic theory for detailed analysis of this 
problem. 

2 (21) and (23) are extreme cases in which there is no question about the
possibility of conjunction. There are many less clear cases. For example, it is
obvious that "John enjoyed the book and liked the play" (a string of the form
NP - VP + and + VP) is a perfectly good sentence, but many would question the
grammaticalness of, e.g., "John enjoyed and my friend liked the play" (a string
of the form NP + Verb + and + NP + Verb - NP). The latter sentence, in which
conjunction crosses over constituent boundaries, is much less natural than the
alternative "John enjoyed the play and my friend liked it", but there is no
preferable alternative to the former. Such sentences with conjunction crossing
constituent boundaries are also, in general, marked by special phonemic features
such as extra long pauses (in our example, between "liked" and "the"), contrast-
ive stress and intonation, failure to reduce vowels and drop final consonants in






(22) (a) the - liner sailed down the - river
(b) the - tugboat chugged up the - river

(23) the - liner sailed down the and tugboat chugged up the - river.

Similarly, if X and Y are both constituents, but are constituents of
different kinds (i.e., if in the diagram of the form (15) they each have
a single origin, but this origin is labelled differently), then we cannot
in general form a new sentence by conjunction. For example, we
cannot form (25) from (24a-b).

(24) (a) the scene - of the movie - was in Chicago
(b) the scene - that I wrote - was in Chicago

(25) the scene - of the movie and that I wrote - was in Chicago.

In fact, the possibility of conjunction offers one of the best criteria 
for the initial determination of phrase structure. We can simplify 
the description of conjunction if we try to set up constituents in 
such a way that the following rule will hold: 

(26) If S_1 and S_2 are grammatical sentences, and S_1 differs from S_2
only in that X appears in S_1 where Y appears in S_2 (i.e.,
S_1 = ..X.. and S_2 = ..Y..), and X and Y are constituents of
the same type in S_1 and S_2, respectively, then S_3 is a sentence,
where S_3 is the result of replacing X by X + and + Y in S_1
(i.e., S_3 = ..X + and + Y..).

rapid speech, etc. Such features normally mark the reading of non-grammatical
strings. The most reasonable way to describe this situation would seem to be by
a description of the following kind: to form fully grammatical sentences by
conjunction, it is necessary to conjoin single constituents; if we conjoin pairs of
constituents, and these are major constituents (i.e., 'high up' in the diagram
(15)), the resulting sentences are semi-grammatical; the more completely we
violate constituent structure by conjunction, the less grammatical is the resulting
sentence. This description requires that we generalize the grammatical-
ungrammatical dichotomy, developing a notion of degree of grammaticalness.
It is immaterial to our discussion, however, whether we decide to exclude such
sentences as "John enjoyed and my friend liked the play" as ungrammatical,
whether we include them as semi-grammatical, or whether we include them as
fully grammatical but with special phonemic features. In any event they form
a class of utterances distinct from "John enjoyed the play and liked the book,"
etc., where constituent structure is preserved perfectly; and our conclusion that
the rule for conjunction must make explicit reference to constituent structure
therefore stands, since this distinction will have to be pointed out in the gram-
mar.




Even though additional qualification is necessary here, the grammar
is enormously simplified if we set up constituents in such a way that
(26) holds even approximately. That is, it is easier to state the
distribution of "and" by means of qualifications on this rule than
to do so directly without such a rule. But we now face the following
difficulty: we cannot incorporate the rule (26) or anything like it in
a grammar [Σ, F] of phrase structure, because of certain fundamen-
tal limitations on such grammars. The essential property of rule
(26) is that in order to apply it to sentences S_1 and S_2 to form the
new sentence S_3 we must know not only the actual form of S_1
and S_2 but also their constituent structure; we must know not only
the final shape of these sentences, but also their 'history of deriva-
tion.' But each rule X → Y of the grammar [Σ, F] applies or fails
to apply to a given string by virtue of the actual substance of this
string. The question of how this string gradually assumed this form
is irrelevant. If the string contains X as a substring, the rule X → Y
can apply to it; if not, the rule cannot apply.
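[Editor's note: the point can be made concrete with a sketch that does carry constituent structure along. Trees of the form (15) are encoded as nested tuples (label, children...); the encoding and the recursion strategy are editorial inventions, and rule (26) is applied at the smallest constituents of the same type that contain the difference.]

    def conjoin(s1, s2):
        """Rule (26) over trees: find where s1 and s2 diverge; if the
        differing parts X and Y are constituents of the same type,
        return s1 with X replaced by X + and + Y."""
        if s1 == s2:
            return s1
        if not (isinstance(s1, tuple) and isinstance(s2, tuple)
                and s1[0] == s2[0] and len(s1) == len(s2)):
            return None                    # not same-type constituents
        diffs = [i for i in range(len(s1)) if s1[i] != s2[i]]
        if len(diffs) == 1:                # difference lies inside one child
            deeper = conjoin(s1[diffs[0]], s2[diffs[0]])
            if deeper is not None:
                return s1[:diffs[0]] + (deeper,) + s1[diffs[0] + 1:]
        return (s1[0], s1, "and", s2)      # conjoin X and Y themselves

    def words(t):
        """Read the terminal words off a tree, left to right."""
        return t if isinstance(t, str) else " ".join(words(c) for c in t[1:])

    # (20a-b), with a rough (editorial) constituent analysis:
    s1 = ("S", ("NP", "the scene", ("PP", "of", "the", "movie")),
          ("VP", "was in Chicago"))
    s2 = ("S", ("NP", "the scene", ("PP", "of", "the", "play")),
          ("VP", "was in Chicago"))
    print(words(conjoin(s1, s2)))
    # the scene of the movie and of the play was in Chicago   (cf. (21))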

We can put this somewhat differently. The grammar [Σ, F] can
also be regarded as a very elementary process that generates
sentences not from "left to right" but from "top to bottom".
Suppose that we have the following grammar of phrase structure:

(27) Σ: Sentence
     F: X_1 → Y_1
        X_2 → Y_2
        ...
        X_n → Y_n

Then we can represent this grammar as a machine with a finite
number of internal states, including an initial and a final state. In
its initial state it can produce only the element Sentence, thereby
moving into a new state. It can then produce any string Y_i such that
Sentence → Y_i is one of the rules of F in (27), again moving into a
new state. Suppose that Y_i is the string ...X_j... Then the machine
can produce the string ...Y_j... by "applying" the rule X_j → Y_j.
The machine proceeds in this way from state to state until it finally
produces a terminal string; it is now in the final state. The machine
thus produces derivations, in the sense of § 4. The important point






is that the state of the machine is completely determined by the
string it has just produced (i.e., by the last step of the derivation);
more specifically, the state is determined by the subset of 'left-hand'
elements X_i of F which are contained in this last-produced string.
But rule (26) requires a more powerful machine, which can "look 
back" to earlier strings in the derivation in order to determine how 
to produce the next step in the derivation. 

Rule (26) is also fundamentally new in a different sense. It makes
essential reference to two distinct sentences S_1 and S_2, but in
grammars of the [Σ, F] type, there is no way to incorporate such
double reference. The fact that rule (26) cannot be incorporated
into the grammar of phrase structure indicates that even if this
form for grammar is not literally inapplicable to English, it is cer-
tainly inadequate in the weaker but sufficient sense considered
above. This rule leads to a considerable simplification of the
grammar; in fact, it provides one of the best criteria for determining
how to set up constituents. We shall see that there are many other
rules of the same general type as (26) which play the same dual role.

5.3 In the grammar (13) we gave only one way of analyzing the
element Verb, namely, as hit (cf. (13vi)). But even with the verbal
root fixed (let us say, as take), there are many other forms that this
element can assume, e.g., takes, has + taken, will + take, has + been +
taken, is + being + taken, etc. The study of these "auxiliary verbs"
turns out to be quite crucial in the development of English grammar.
We shall see that their behavior is very regular and simply describ-
able when observed from a point of view that is quite different from
that developed above, though it appears to be quite complex if we
attempt to incorporate these phrases directly into a [Σ, F] grammar.

Consider first the auxiliaries that appear unstressed; for example,
"has" in "John has read the book" but not "does" in "John does 
read books." 3 We can state the occurrence of these auxiliaries in 
declarative sentences by adding to the grammar (13) the following 
rules:

3 We return to the stressed auxiliary "do" below, in § 7.1 (45)-(47).






(28) (i) Verb → Aux + V
(ii) V → hit, take, walk, read, etc.
(iii) Aux → C(M) (have + en) (be + ing) (be + en)
(iv) M → will, can, may, shall, must

(29) (i) C → S in the context NP_sing — 4
         Ø in the context NP_pl —
         past
(ii) Let Af stand for any of the affixes past, S, Ø, en, ing. Let
v stand for any M or V, or have or be (i.e., for any non-
affix in the phrase Verb). Then:
         Af + v → v + Af #,
where # is interpreted as word boundary. 5
(iii) Replace + by # except in the context v — Af. Insert #
initially and finally.

The interpretation of the notations in (28iii) is as follows: we must
choose the element C, and we may choose zero or more of the
parenthesized elements in the given order. In (29i) we may develop
C into any of three morphemes, observing the contextual restrictions
given. As an example of the application of these rules, we construct
a derivation in the style of (14), omitting the initial steps.

(30) the + man + Verb + the + book              from (13i-v)
     the + man + Aux + V + the + book           (28i)
     the + man + Aux + read + the + book        (28ii)
     the + man + C + have + en + be + ing + read + the + book
                         (28iii) - we select the elements
                         C, have + en, and be + ing
     the + man + S + have + en + be + ing + read + the + book
                         (29i)

4 We assume here that (13ii) has been extended in the manner of fn. 3,
above, p. 29, or something similar.

5 If we were formulating the theory of grammar more carefully, we would
interpret # as the concatenation operator on the level of words, while + is the
concatenation operator on the level of phrase structure. (29) would then be part
of the definition of a mapping which carries certain objects on the level of phrase
structure (essentially, diagrams of the form (15)) into strings of words. See my
The logical structure of linguistic theory for a more careful formulation.






     the + man + have + S + be + en + read + ing + the + book
                         (29ii), three times
     # the # man # have + S # be + en # read + ing # the # book #
                         (29iii)

The morphophonemic rules (19), etc., will convert the last line of
this derivation into:

(31) the man has been reading the book

in phonemic transcription. Similarly, every other auxiliary verb
phrase can be generated. We return later to the question of further
restrictions that must be placed on these rules so that only gramma-
tical sequences can be generated. Note, incidentally, that the
morphophonemic rules will have to include such rules as: will + S →
will, will + past → would. These rules can be dropped if we rewrite
(28iii) so that either C or M, but not both, can be selected. But now
the forms would, could, might, should must be added to (28iv), and
certain 'sequence of tense' statements become more complex. It is
immaterial to our further discussion which of these alternative
analyses is adopted. Several other minor revisions are possible.
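[Editor's note: the inversion effected by (29ii) can be stated as a single mechanical pass over a string of morphemes. The sketch below is an editorial illustration; the membership lists for Af and v are abbreviated, and the initial word boundary of (29iii) is omitted.]

    AFFIXES = {"past", "S", "0", "en", "ing"}            # Af of (29ii)
    VERBALS = {"M", "V", "have", "be", "will", "read"}   # v of (29ii)

    def affix_hop(morphemes):
        """Rule (29ii), with the final boundaries of (29iii) folded in:
        wherever an affix Af immediately precedes a verbal element v,
        invert to v + Af #; other words are simply followed by #."""
        out, i = [], 0
        while i < len(morphemes):
            if (i + 1 < len(morphemes) and morphemes[i] in AFFIXES
                    and morphemes[i + 1] in VERBALS):
                out += [morphemes[i + 1], "+", morphemes[i], "#"]
                i += 2
            else:
                out += [morphemes[i], "#"]
                i += 1
        return " ".join(out)

    # The line of (30) to which (29ii) applies, after C -> S by (29i):
    line = ["the", "man", "S", "have", "en", "be", "ing", "read", "the", "book"]
    print(affix_hop(line))
    # the # man # have + S # be + en # read + ing # the # book #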

Notice that in order to apply (29i) in (30) we had to use the fact
that the + man is a singular noun phrase NP_sing. That is, we had to
refer back to some earlier step in the derivation in order to determine
the constituent structure of the + man. (The alternative of ordering
(29i) and the rule that develops NP_sing into the + man in such a way
that (29i) must precede the latter is not possible, for a variety of
reasons, some of which appear below.) Hence, (29i), just like (26),
goes beyond the elementary Markovian character of grammars of
phrase structure, and cannot be incorporated within the [Σ, F]
grammar.

Rule (29ii) violates the requirements of [Σ, F] grammars even
more severely. It also requires reference to constituent structure
(i.e., past history of derivation) and in addition, we have no way to
express the required inversion within the terms of phrase structure.
Note that this rule is useful elsewhere in the grammar, at least in
the case where Af is ing. Thus the morphemes to and ing play a very




similar role within the noun phrase in that they convert verb phrases
into noun phrases, giving, e.g.,

(32) {to prove that theorem / proving that theorem} - was difficult

etc. We can exploit this parallel by adding to the grammar (13) the
rule

(33) NP → {ing / to} + VP.

The rule (29ii) will then convert ing + prove + that + theorem into
proving # that + theorem. A more detailed analysis of the VP shows
that this parallel extends much further than this, in fact.

The reader can easily determine that to duplicate the effect of
(28iii) and (29) without going beyond the bounds of a system [Σ, F]
of phrase structure, it would be necessary to give a fairly complex
statement. Once again, as in the case of conjunction, we see that
significant simplification of the grammar is possible if we are
permitted to formulate rules of a more complex type than those that
correspond to a system of immediate constituent analysis. By
allowing ourselves the freedom of (29ii) we have been able to state
the constituency of the auxiliary phrase in (28iii) without regard to
the interdependence of its elements, and it is always easier to describe
a sequence of independent elements than a sequence of mutually
dependent ones. To put the same thing differently, in the auxiliary
verb phrase we really have discontinuous elements - e.g., in (30),
the elements have..en and be..ing. But discontinuities cannot be
handled within [Σ, F] grammars. 6 In (28iii) we treated these

6 We might attempt to extend the notions of phrase structure to account
for discontinuities. It has been pointed out several times that fairly serious
difficulties arise in any systematic attempt to pursue this course. Cf. my
"System of syntactic analysis," Journal of Symbolic Logic 18.242-56 (1953);
C. F. Hockett, "A formal statement of morphemic analysis," Studies in Lin-
guistics 10.27-39 (1952); idem, "Two models of grammatical description,"
Linguistics Today, Word 10.210-33 (1954). Similarly, one might seek to remedy
some of the other deficiencies of [Σ, F] grammars by a more complex account
of phrase structure. I think that such an approach is ill-advised, and that it can
only lead to the development of ad hoc and fruitless elaborations. It appears to
be the case that the notions of phrase structure are quite adequate for a small






elements as continuous, and we introduced the discontinuity by the 
very simple additional rule (29 ii). We shall see below, in § 7, that 
this analysis of the element Verb serves as the basis for a far-reach- 
ing and extremely simple analysis of several important features of 
English syntax. 

5.4 As a third example of the inadequacy of the conceptions of
phrase structure, consider the case of the active-passive relation.
Passive sentences are formed by selecting the element be + en in rule
(28iii). But there are heavy restrictions on this element that make it
unique among the elements of the auxiliary phrase. For one thing,
be + en can be selected only if the following V is transitive (e.g.,
was + eaten is permitted, but not was + occurred); but with a few
exceptions the other elements of the auxiliary phrase can occur
freely with verbs. Furthermore, be + en cannot be selected if the
verb V is followed by a noun phrase, as in (30) (e.g., we cannot in
general have NP + is + V + en + NP, even when V is transitive;
we cannot have "lunch is eaten John"). Furthermore, if V is transitive
and is followed by the prepositional phrase by + NP, then we must
select be + en (we can have "lunch is eaten by John" but not "John
is eating by lunch," etc.). Finally, note that in elaborating (13) into
a full-fledged grammar we will have to place many restrictions on
the choice of V in terms of subject and object in order to permit such
sentences as: "John admires sincerity," "sincerity frightens John,"
"John plays golf," "John drinks wine," while excluding the 'inverse'
non-sentences 7 "sincerity admires John," "John frightens sincerity,"



part of the language, and that the rest of the language can be derived by repeated
application of a rather simple set of transformations to the strings given by the
phrase structure grammar. If we were to attempt to extend phrase structure
grammar to cover the entire language directly, we would lose the simplicity of
the limited phrase structure grammar and of the transformational development.
This approach would miss the main point of level construction (cf. first para-
graph of § 3.1), namely, to rebuild the vast complexity of the actual language
more elegantly and systematically by extracting the contribution to this com-
plexity of several linguistic levels, each of which is simple in itself.

7 Here too we might make use of a notion of levels of grammaticalness as
suggested in footnote 2, p. 35. Thus "sincerity admires John," though clearly
less grammatical than "John admires sincerity," is certainly more grammatical






"golf plays John," "wine drinks John". But this whole network of 
restrictions fails completely when we choose be 4 en as part of the 
auxiliary verb. In fact, m this case the same selectional dependen- 
cies hold, but in the opposite order. That is, for every sentence 
AT,— V—NP 2 wc can have a corresponding sentence AT^ — ij+ Ven 
— by-]- NP X . If we try to include passives directly in the grammar 
(13), we shall have to restate all of these restrictions in the opposite 
order for the case in which be + en is chosen as part of the auxiliary 
verb. This inelegant duplication, as well as the special restrictions 
involving the element be + en, can be avoided only if we deliberately 
exclude passives from the grammar of phrase structure, and 
reintroduce them by a rule such as: 

(34) If S_1 is a grammatical sentence of the form

NP_1 - Aux - V - NP_2,

then the corresponding string of the form

NP_2 - Aux + be + en - V - by + NP_1

is also a grammatical sentence.

For example, if John - C - admire - sincerity is a sentence, then
sincerity - C + be + en - admire - by + John (which by (29) and
(19) becomes "sincerity is admired by John") is also a sentence.

We can now drop the element be + en, and all of the special
restrictions associated with it, from (28iii). The fact that be + en
requires a transitive verb, that it cannot occur before V + NP, that it
must occur before V + by + NP (where V is transitive), that it inverts
the order of the surrounding noun phrases, is in each case an
automatic consequence of rule (34). This rule thus leads to a con-
siderable simplification of the grammar. But (34) is well beyond the
limits of [Σ, F] grammars. Like (29ii), it requires reference to the
constituent structure of the string to which it applies and it carries
out an inversion on this string in a structurally determined manner.



than "of admires John," I believe that a workable notion of degree of gramma- 
ticalness can be developed in purely formal terms I cf. m> The logical structure of 
linguistic theory , but this goes beyond the bounds of the present discussjon. 
See § 7.5 for an even stronger demonstration that inversion is necessary .n the 
passive. 






5.5 We have discussed three rules ((26), (29), (34)) which materially
simplify the description of English but which cannot be incorporat-
ed into a [Σ, F] grammar. There are a great many other rules of this
type, a few of which we shall discuss below. By further study of the
limitations of phrase structure grammars with respect to English we
can show quite conclusively that these grammars will be so hope-
lessly complex that they will be without interest unless we incor-
porate such rules.

If we examine carefully the implications of these supplementary
rules, however, we see that they lead to an entirely new conception
of linguistic structure. Let us call each such rule a "grammatical
transformation." A grammatical transformation T operates on a
given string (or, as in the case of (26), on a set of strings) with a
given constituent structure and converts it into a new string with a
new derived constituent structure. To show exactly how this
operation is performed requires a rather elaborate study which
would go far beyond the scope of these remarks, but we can in fact
develop a certain fairly complex but reasonably natural algebra of
transformations having the properties that we apparently require
for grammatical description.8

From these few examples we can already detect some of the
essential properties of a transformational grammar. For one thing,
it is clear that we must define an order of application on these
transformations. The passive transformation (34), for example,
must apply before (29). It must precede (29i), in particular, so that
the verbal element in the resulting sentence will have the same
number as the new grammatical subject of the passive sentence.
And it must precede (29ii) so that the latter rule will apply properly
to the new inserted element be + en. (In discussing the question of
whether or not (29i) can be fitted into a [Σ, F] grammar, we mentioned
that this rule could not be required to apply before the rule

8 See my "Three models for the description of language" (above, p. 22, fn. 3)
for a brief account of transformations, and The logical structure of linguistic
theory and Transformational Analysis for a detailed development of
transformational algebra and transformational grammars. See Z. S. Harris,
"Co-occurrence and transformation in linguistic structure," Language 33.283-340
(1957), for a somewhat different approach to transformational analysis.






analyzing NPsing into the + man, etc. One reason for this is now
obvious: (29i) must apply after (34), but (34) must apply after the
analysis of NPsing, or we will not have the proper selectional
relations between the subject and verb and the verb and 'agent' in
the passive.)

Secondly, note that certain transformations are obligatory, 
whereas others are only optional. For example, (29) must be applied 
to every derivation, or the result will simply not be a sentence. 9 
But (34), the passive transformation, may or may not be applied in 
any particular case. Either way the result is a sentence. Hence (29) 
is an obligatory transformation and (34) is an optional trans- 
formation. 

This distinction between obligatory and optional transformations
leads us to set up a fundamental distinction among the sentences of
the language. Suppose that we have a grammar G with a [Σ, F] part
and a transformational part, and suppose that the transformational
part has certain obligatory transformations and certain optional
ones. Then we define the kernel of the language (in terms of the
grammar G) as the set of sentences that are produced when we
apply obligatory transformations to the terminal strings of the
[Σ, F] grammar. The transformational part of the grammar will be
set up in such a way that transformations can apply to kernel
sentences (more correctly, to the forms that underlie kernel
sentences, i.e., to terminal strings of the [Σ, F] part of the grammar)
or to prior transforms. Thus every sentence of the language will
either belong to the kernel or will be derived from the strings
underlying one or more kernel sentences by a sequence of one or
more transformations.
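The definition of the kernel has a simple procedural shape. A hedged sketch, assuming that transformations are given as functions on strings and listed in their order of application (the names below are inventions of the sketch):

    # A derivation applies every obligatory transformation, in order,
    # and whichever optional ones (e.g. the passive (34)) are chosen.
    def derive(terminal_string, transformations, chosen_optional):
        s = terminal_string
        for t, obligatory in transformations:   # ordered (T, flag) pairs
            if obligatory or t in chosen_optional:
                s = t(s)
        return s

    # A kernel sentence results exactly when chosen_optional is empty.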

From these considerations we are led to a picture of grammars as
possessing a natural tripartite arrangement. Corresponding to the
level of phrase structure, a grammar has a sequence of rules of the
form X → Y, and corresponding to lower levels it has a sequence of

9 But of the three parts of (29i), only the third is obligatory. That is, past
may occur after NPsing or NPpl. Whenever we have an element such as C in (29i)
which must be developed, but perhaps in several alternative ways, we can order
the alternatives and make each one but the last optional, and the last one obligatory.






morphophonemic rules of the same basic form. Linking these two
sequences, it has a sequence of transformational rules. Thus the
grammar will look something like this:

(35)  Σ: Sentence
      F: X1 → Y1
         ...
         Xn → Yn          (Phrase structure)

         T1
         ...
         Tj               (Transformational structure)

         Z1 → W1
         ...
         Zm → Wm          (Morphophonemics)



To produce a sentence from such a grammar we construct an
extended derivation beginning with Sentence. Running through
the rules of F we construct a terminal string that will be a sequence
of morphemes, though not necessarily in the correct order. We then
run through the sequence of transformations T1, ..., Tj, applying
each obligatory one and perhaps certain optional ones. These
transformations may rearrange strings or may add or delete
morphemes. As a result they yield a string of words. We then run
through the morphophonemic rules, thereby converting this string
of words into a string of phonemes. The phrase structure segment
of the grammar will include such rules as those of (13), (17) and (28).
The transformational part will include such rules as (26), (29) and
(34), formulated properly in the terms that must be developed in a
full-scale theory of transformations. The morphophonemic part
will include such rules as (19). This sketch of the process of
generation of sentences must (and easily can) be generalized to allow for
proper functioning of such rules as (26) which operate on a set of
sentences, and to allow transformations to reapply to transforms so
that more and more complex sentences can be produced.
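The process just sketched is a three-stage pipeline corresponding to (35). A minimal functional rendering (the function names are assumptions of this sketch, and the generalizations just mentioned are ignored):

    # One pass through the tripartite grammar (35): phrase structure,
    # then transformations, then morphophonemics.
    def generate(run_phrase_structure, transformations, run_morphophonemics):
        morphemes = run_phrase_structure()      # terminal string of morphemes
        for t in transformations:               # obligatory + chosen optional
            morphemes = t(morphemes)            # rearrange, add, delete
        return run_morphophonemics(morphemes)   # string of phonemes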

When we apply only obligatory transformations in the generation 
of a given sentence, we call the resulting sentence a kernel sentence. 
Further investigation would show that in the phrase structure and 






morphophonemic parts of the grammar we can also extract a
skeleton of obligatory rules that must be applied whenever we reach
them in the process of generating a sentence. In the last few
paragraphs of § 4 we pointed out that the phrase structure rules lead to a
conception of linguistic structure and "level of representation" that
is fundamentally different from that provided by the morphophonemic
rules. On each of the lower levels corresponding to the lower
third of the grammar an utterance is, in general, represented by a
single sequence of elements. But phrase structure cannot be broken
down into sublevels: on the level of phrase structure an utterance is
represented by a set of strings that cannot be ordered into higher or
lower levels. This set of representing strings is equivalent to a
diagram of the form (15). On the transformational level, an
utterance is represented even more abstractly in terms of a sequence
of transformations by which it is derived, ultimately from kernel
sentences (more correctly, from the strings which underlie kernel
sentences). There is a very natural general definition of "linguistic
level" that includes all of these cases,10 and as we shall see later,
there is good reason to consider each of these structures to be a
linguistic level.

When transformational analysis is properly formulated we find
that it is essentially more powerful than description in terms of
phrase structure, just as the latter is essentially more powerful than
description in terms of finite state Markov processes that generate
sentences from left to right. In particular, such languages as (10iii)
which lie beyond the bounds of phrase structure description with
context-free rules can be derived transformationally.11 It is
important to observe that the grammar is materially simplified when
we add a transformational level, since it is now necessary to provide
phrase structure directly only for kernel sentences; the terminal
strings of the [Σ, F] grammar are just those which underlie kernel

10 Cf. The logical structure of linguistic theory and Transformational Analysis.

11 Let G be a [Σ, F] grammar with the initial string Sentence and with the set
of all finite strings of a's and b's as its terminal output. There is such a grammar.
Let G' be the grammar which contains G as its phrase structure part,
supplemented by the transformation T that operates on any string K which is a
Sentence, converting it into K + K. Then the output of G' is (10iii). Cf. p. 31.
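The construction of footnote 11 can be checked mechanically. In the sketch below the phrase structure part is simulated by a bounded enumeration of strings of a's and b's; the bound and the string encoding are assumptions of the sketch:

    from itertools import product

    def G(max_len):                     # stands in for the [Σ, F] part
        for n in range(1, max_len + 1):
            for chars in product("ab", repeat=n):
                yield "".join(chars)

    def T(k):                           # the single transformation
        return k + k                    # K -> K + K

    print([T(k) for k in G(2)])
    # -> ['aa', 'bb', 'aaaa', 'abab', 'baba', 'bbbb'], strings of (10iii)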






sentences. We choose the kernel sentences in such a way that the
terminal strings underlying the kernel are easily derived by means
of a [Σ, F] description, while all other sentences can be derived from
these terminal strings by simply statable transformations. We have
seen, and shall see again below, several examples of simplifications
resulting from transformational analysis. Full-scale syntactic
investigation of English provides a great many more cases.

One further point about grammars of the form (35) deserves
mention, since it has apparently led to some misunderstanding.
We have described these grammars as devices for generating
sentences. This formulation has occasionally led to the idea that
there is a certain asymmetry in grammatical theory in the sense
that grammar is taking the point of view of the speaker rather
than the hearer; that it is concerned with the process of producing
utterances rather than the 'inverse' process of analyzing and
reconstructing the structure of given utterances. Actually, grammars
of the form that we have been discussing are quite neutral
as between speaker and hearer, between synthesis and analysis
of utterances. A grammar does not tell us how to synthesize a
specific utterance; it does not tell us how to analyze a particular
given utterance. In fact, these two tasks which the speaker and
hearer must perform are essentially the same, and are both outside
the scope of grammars of the form (35). Each such grammar is
simply a description of a certain set of utterances, namely, those
which it generates. From this grammar we can reconstruct the
formal relations that hold among these utterances in terms of the
notions of phrase structure, transformational structure, etc. Perhaps
the issue can be clarified by an analogy to a part of chemical theory
concerned with the structurally possible compounds. This theory
might be said to generate all physically possible compounds just as
a grammar generates all grammatically 'possible' utterances. It
would serve as a theoretical basis for techniques of qualitative
analysis and synthesis of specific compounds, just as one might rely
on a grammar in the investigation of such special problems as
analysis and synthesis of particular utterances.



6 



ON THE GOALS OF LINGUISTIC THEORY 



6.1 In §§ 3, 4 two models of linguistic structure were developed: a
simple communication theoretic model and a formalized version of
immediate constituent analysis. Each was found to be inadequate,
and in § 5 I suggested a more powerful model combining phrase
structure and grammatical transformations that might remedy these
inadequacies. Before going on to explore this possibility, I would
like to clarify certain points of view that underlie the whole approach
of this study.

Our fundamental concern throughout this discussion of linguistic 
structure is the problem of justification of grammars. A grammar 
of the language L is essentially a theory of L. Any scientific theory 
is based on a finite number of observations, and it seeks to relate the 
observed phenomena and to predict new phenomena by construct- 
ing general laws in terms of hypothetical constructs such as (in 
physics, for example) "mass" and "electron." Similarly, a grammar 
of English is based on a finite corpus of utterances (observations), 
and it will contain certain grammatical rules (laws) stated in terms 
of the particular phonemes, phrases, etc., of English (hypothetical 
constructs). These rules express structural relations among the 
sentences of the corpus and the indefinite number of sentences 
generated by the grammar beyond the corpus (predictions). Our 
problem is to develop and clarify the criteria for selecting the correct 
grammar for each language, that is, the correct theory of this 
language. 

Two types of criteria were mentioned in §2.1. Clearly, every 
grammar will have to meet certain external conditions of adequacy; 
e.g., the sentences generated will have to be acceptable to the native 






speaker. In § 8 we shall consider several other external conditions of
this sort. In addition, we pose a condition of generality on grammars;
we require that the grammar of a given language be constructed
in accordance with a specific theory of linguistic structure in
which such terms as "phoneme" and "phrase" are defined
independently of any particular language.1 If we drop either the external
conditions or the generality requirement, there will be no way to
choose among a vast number of totally different 'grammars,' each
compatible with a given corpus. But, as we observed in § 2.1, these
requirements jointly give us a very strong test of adequacy for a
general theory of linguistic structure and the set of grammars that
it provides for particular languages.

Notice that neither the general theory nor the particular grammars
are fixed for all time, in this view. Progress and revision may
come from the discovery of new facts about particular languages,
or from purely theoretical insights about organization of linguistic
data, that is, new models for linguistic structure. But there is also
no circularity in this conception. At any given time we can attempt
to formulate as precisely as possible both the general theory and the
set of associated grammars that must meet the empirical, external
conditions of adequacy.

We have not yet considered the following very crucial question: 
What is the relation between the general theory and the particular 
grammars that follow from it? In other words, what sense can we 
give to the notion "follow from," in this context? It is at this point
that our approach will diverge sharply from many theories of 
linguistic structure. 

The strongest requirement that could be placed on the relation 
between a theory of linguistic structure and particular grammars is 
that the theory must provide a practical and mechanical method for 

1 I presume that these two conditions are similar to what Hjelmslev has in
mind when he speaks of the appropriateness and arbitrariness of linguistic theory.
Cf. L. Hjelmslev, Prolegomena to a theory of language = Memoir 7, Indiana
University Publications in Anthropology and Linguistics (Baltimore, 1953), p. 8.
See also Hockett's discussion of "metacriteria" for linguistics ("Two models of
grammatical description," Linguistics Today, Word 10.232-3) in this connection.






actually constructing the grammar, given a corpus of utterances. 
Let us say that such a theory provides us with a discovery procedure 
for grammars. 

A weaker requirement would be that the theory must provide a 
practical and mechanical method for determining whether or not a 
grammar proposed for a given corpus is, in fact, the best grammar 
of the language from which this corpus is drawn. Such a theory, 
which is not concerned with the question of how this grammar was 
constructed, might be said to provide a decision procedure for 
grammars. 

An even weaker requirement would be that given a corpus and
given two proposed grammars G1 and G2, the theory must tell us
which is the better grammar of the language from which the corpus
is drawn. In this case we might say that the theory provides an
evaluation procedure for grammars.

These theories can be represented graphically in the following 
manner. 



(36) (i)   CORPUS → [THEORY] → GRAMMAR

     (ii)  GRAMMAR, CORPUS → [THEORY] → YES or NO

     (iii) G1, G2, CORPUS → [THEORY] → G1 or G2






Figure (36i) represents a theory conceived as a machine with a
corpus as its input and a grammar as its output; hence, a theory that
provides a discovery procedure. (36ii) is a device with a grammar
and a corpus as its inputs, and the answers "yes" or "no" as its
outputs, as the grammar is or is not the correct one; hence, it
represents a theory that provides a decision procedure for grammars.
(36iii) represents a theory with grammars G1 and G2 and a
corpus as inputs, and the preferred one of G1 and G2 as output;
hence a theory that provides an evaluation procedure for grammars.2
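Read as function signatures (an illustrative assumption, not a formalization offered by the text), the three conceptions differ only in what they take in and what they give back:

    # Stubs only: the three procedures of (36), distinguished by signature.
    def discovery_procedure(corpus):
        """(36i): return a grammar, given only the corpus."""

    def decision_procedure(grammar, corpus):
        """(36ii): return yes or no, as the grammar is or is not correct."""

    def evaluation_procedure(g1, g2, corpus):
        """(36iii): return whichever of g1, g2 is the better grammar."""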

The point of view adopted here is that it is unreasonable to
demand of linguistic theory that it provide anything more than a
practical evaluation procedure for grammars. That is, we adopt the
weakest of the three positions described above. As I interpret most
of the more careful proposals for the development of linguistic
theory,3 they attempt to meet the strongest of these three
requirements. That is, they attempt to state methods of analysis that an
investigator might actually use, if he had the time, to construct a
grammar of a language directly from the raw data. I think that it is
very questionable that this goal is attainable in any interesting way,

2 The basic question at issue is not changed if we are willing to accept a
small set of correct grammars instead of a single one.

3 For example, B. Bloch, "A set of postulates for phonemic analysis,"
Language 24.3-46 (1948); N. Chomsky, "Systems of syntactic analysis,"
Journal of Symbolic Logic 18.242-56 (1953); Z. S. Harris, "From phoneme to
morpheme," Language 31.190-222 (1955); idem, Methods in structural linguistics
(Chicago, 1951); C. F. Hockett, "A formal statement of morphemic analysis,"
Studies in Linguistics 10.27-39 (1952); idem, "Problems of morphemic analysis,"
Language 23.321-43 (1947); R. S. Wells, "Immediate constituents," Language
23.81-117 (1947); and many other works. Although discovery procedures are
the explicit goal of these works, we often find on careful examination that the
theory that has actually been constructed furnishes no more than an evaluation
procedure for grammars. For example, Hockett states his aim in "A formal
statement of morphemic analysis" as the development of "formal procedures
by which one can work from scratch to a complete description of the pattern of
a language" (p. 27); but what he actually does is describe some of the formal
properties of a morphological analysis and then propose a "criterion whereby
the relative efficiency of two possible morphic solutions can be determined;
with that, we can choose the maximally efficient possibility, or, arbitrarily, any
one of those which are equally efficient but more efficient than all others" (p. 29).






and I suspect that any attempt to meet it will lead into a maze of
more and more elaborate and complex analytic procedures that
will fail to provide answers for many important questions about the
nature of linguistic structure. I believe that by lowering our sights
to the more modest goal of developing an evaluation procedure for
grammars we can focus attention more clearly on really crucial
problems of linguistic structure and we can arrive at more satisfying
answers to them. The correctness of this judgment can only be
determined by the actual development and comparison of theories
of these various sorts. Notice, however, that the weakest of these
three requirements is still strong enough to guarantee significance
for a theory that meets it. There are few areas of science in which one
would seriously consider the possibility of developing a general,
practical, mechanical method for choosing among several theories,
each compatible with the available data.

In the case of each of these conceptions of linguistic theory we
have qualified the characterization of the type of procedure by the
word "practical". This vague qualification is crucial for an empirical
science. Suppose, for example, that we were to evaluate grammars
by measuring some such simple property as length. Then it
would be correct to say that we have a practical evaluation procedure
for grammars, since we could count the number of symbols
they contain; and it would also be literally correct to say that we
have a discovery procedure, since we can order all sequences of the
finite number of symbols from which grammars are constructed in
terms of length, and we can test each of these sequences to see if it
is a grammar, being sure that after some finite amount of time we
shall find the shortest sequence that qualifies. But this is not the
type of discovery procedure that is contemplated by those who are
attempting to meet the strong requirement discussed above.
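The 'literal' discovery procedure just described, ordering all symbol sequences by length and testing each, can be sketched directly; the is_grammar test is assumed as given, and the search terminates only if some sequence does in fact qualify:

    from itertools import count, product

    def shortest_grammar(symbols, is_grammar):
        for length in count(1):                      # order by length
            for seq in product(symbols, repeat=length):
                if is_grammar(seq):                  # test each sequence
                    return seq                       # first hit is shortest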

Suppose that we use the word "simplicity" to refer to the set of
formal properties of grammars that we shall consider in choosing
among them. Then there are three main tasks in the kind of
program for linguistic theory that we have suggested. First, it is
necessary to state precisely (if possible, with operational, behavioral
tests) the external criteria of adequacy for grammars. Second, we






must characterize the form of grammars in a general and explicit
way so that we can actually propose grammars of this form for
particular languages. Third, we must analyze and define the notion
of simplicity that we intend to use in choosing among grammars all
of which are of the proper form. Completion of the latter two tasks
will enable us to formulate a general theory of linguistic structure in
which such notions as "phoneme in L," "phrase in L," "transformation
in L" are defined for an arbitrary language L in terms of
physical and distributional properties of utterances of L and formal
properties of grammars of L.4 For example, we shall define the set
of phonemes of L as a set of elements which have certain physical
and distributional properties, and which appear in the simplest
grammar for L. Given such a theory, we can attempt to construct
grammars for actual languages, and we can determine whether or
not the simplest grammars that we can find (i.e., the grammars that
the general theory compels us to choose) meet the external conditions
of adequacy. We shall continue to revise our notions of
simplicity and our characterization of the form of grammars until
the grammars selected by the theory do meet the external conditions.5
Notice that this theory may not tell us, in any practical way, how
to actually go about constructing the grammar of a given language
from a corpus. But it must tell us how to evaluate such a grammar;
it must thus enable us to choose between two proposed grammars.

In the preceding sections of this study we have been concerned
with the second of these three tasks. We have assumed that the set
of grammatical sentences of English is given and that we have some
notion of simplicity, and we have tried to determine what sort of
grammar will generate exactly the grammatical sentences in some
simple way. To formulate this goal in somewhat different terms,

4 Linguistic theory will thus be formulated in a metalanguage to the
language in which grammars are written, a metametalanguage to any language
for which a grammar is constructed.

5 We may in fact revise the criteria of adequacy, too, in the course of
research. That is, we may decide that certain of these tests do not apply to
grammatical phenomena. The subject matter of a theory is not completely
determined in advance of investigation. It is partially determined by the possibility
of giving an organized and systematic account of some range of phenomena.




we remarked above that one of the notions that must be defined in
general linguistic theory is "sentence in L." Entering into the
definitions will be such terms as "observed utterance in L," "simplicity
of the grammar of L," etc. This general theory is accordingly
concerned with clarifying the relation between the set of grammatical
sentences and the set of observed sentences. Our investigation of
the structure of the former set is a preparatory study, proceeding
from the assumption that before we can characterize this relation
clearly, we will have to know a great deal more about the formal
properties of each of these sets.

In § 7 below, we shall continue to investigate the relative
complexity of various ways of describing English structure. In particular,
we shall be concerned with the question of whether the whole
grammar is simplified if we consider a certain class of sentences to
be kernel sentences or if we consider them to be derived by
transformation. We thus arrive at certain decisions about the structure
of English. In § 8 we shall argue that there is independent evidence
in favor of our method for selecting grammars. That is, we shall try
to show that the simpler grammars meet certain external conditions
of adequacy while the more complex grammars that embody
different decisions about assignment of sentences to the kernel, etc.,
fail these conditions. These results can be no more than suggestive,
however, until we give a rigorous account of the notion of simplicity
employed. I think that such an account can be given, but this would
go beyond the scope of the present monograph. Nevertheless, it
should be fairly clear that under any reasonable definition of
"simplicity of grammar," most of the decisions about relative
complexity that we reach below will stand.6

Notice that simplicity is a systematic measure; the only ultimate

6 See my The logical structure of linguistic theory for discussion of methods
for evaluating grammars in terms of formal properties of simplicity.

We are not, incidentally, denying the usefulness of even partially adequate
discovery procedures. They may provide valuable hints to the practicing linguist,
or they may lead to a small set of grammars that can then be evaluated. Our
main point is that a linguistic theory should not be identified with a manual
of useful procedures, nor should it be expected to provide mechanical procedures
for the discovery of grammars.






criterion in evaluation is the simplicity of the whole system. In 
discussing particular cases, we can only indicate how one or another 
decision will affect the over-all complexity. Such validation can 
only be tentative, since by simplifying one part of the grammar we 
may complicate other parts. It is when we find that simplification 
of one part of the grammar leads to corresponding simplification of 
other parts that we feel that we are really on the right track. Below, 
we shall try to show that the simplest transformational analysis of 
one class of sentences does quite frequently clear the way to a 
simpler analysis of other classes. 

In short, we shall never consider the question of how one might
have arrived at the grammar whose simplicity is being determined;
e.g., how one might have discovered the analysis of the verb phrase
presented in § 5.3. Questions of this sort are not relevant to the
program of research that we have outlined above. One may arrive
at a grammar by intuition, guess-work, all sorts of partial
methodological hints, reliance on past experience, etc. It is no doubt possible
to give an organized account of many useful procedures of analysis,
but it is questionable whether these can be formulated rigorously,
exhaustively and simply enough to qualify as a practical and
mechanical discovery procedure. At any rate, this problem is not within
the scope of our investigations here. Our ultimate aim is to provide
an objective, non-intuitive way to evaluate a grammar once presented,
and to compare it with other proposed grammars. We are thus
interested in describing the form of grammars (equivalently, the
nature of linguistic structure) and investigating the empirical
consequences of adopting a certain model for linguistic structure,
rather than in showing how, in principle, one might have arrived at
the grammar of a language.

6.2 Once we have disclaimed any intention of finding a practical
discovery procedure for grammars, certain problems that have been
the subject of intense methodological controversy simply do not
arise. Consider the problem of interdependence of levels. It has
been correctly pointed out that if morphemes are defined in terms of
phonemes, and, simultaneously, morphological considerations are






considered relevant to phonemic analysis, then linguistic theory may
be nullified by a real circularity. However, interdependence of levels
does not necessarily lead to circularity. In this case, for example, we
can define "tentative phoneme set" and "tentative morpheme set"
independently, and we can develop a relation of compatibility that
holds between tentative phoneme sets and tentative morpheme sets.
We can then define a pair of a phoneme set and a morpheme set for
a given language as a compatible pair of a tentative phoneme set and
a tentative morpheme set. Our compatibility relation may be
framed partially in terms of simplicity considerations; that is, we
may define the phonemes and morphemes of a language as the
tentative phonemes and morphemes which, among other things,
jointly lead to the simplest grammar. This gives us a perfectly
straightforward way to define interdependent levels with no
circularity. Of course, it does not tell us how to find the phonemes and
morphemes in a direct, mechanical way. But no other phonemic or
morphological theory really meets this strong requirement either,
and there is little reason to believe that it can be met significantly.
In any event, when we lower our aims to the development of an
evaluation procedure, there remains little motivation for any
objection to mixing of levels, and there is no difficulty in avoiding
circularity in the definition of interdependent levels.7
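The noncircular definition just given can be pictured procedurally. A sketch under the assumption that the compatibility relation and a measure of grammatical complexity are supplied as black-box functions:

    # Choose, among compatible pairs of tentative phoneme and morpheme
    # sets, the pair that jointly leads to the simplest grammar.
    def define_levels(phoneme_sets, morpheme_sets, compatible, complexity):
        pairs = [(p, m) for p in phoneme_sets for m in morpheme_sets
                 if compatible(p, m)]
        return min(pairs, key=lambda pair: complexity(*pair))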

7 See Z. S. Harris, Methods in structural linguistics (Chicago, 1951) (e.g.,
Appendix to 7.4, Appendix to 8.2, chapters 9, 12) for examples of procedures
which lead to interdependent levels. I think that Fowler's objections to Harris'
morphological procedures (cf. Language 28.504-9 [1952]) can be met without
difficulty by a noncircular formulation of the type just proposed. Cf. C. F.
Hockett, A manual of phonology = Memoir 11, Indiana University Publications
in Anthropology and Linguistics (Baltimore, 1955); idem, "Two fundamental
problems in phonemics," Studies in Linguistics 7.33 (1949); R. Jakobson, "The
phonemic and grammatical aspects of language and their interrelation,"
Proceedings of the Sixth International Congress of Linguists 5-18 (Paris, 1948);
K. L. Pike, "Grammatical prerequisites to phonemic analysis," Word 3.155-72
(1947); idem, "More on grammatical prerequisites," Word 8.106-21 (1952), for
further discussion on interdependence of levels. Also N. Chomsky, M. Halle,
F. Lukoff, "On accent and juncture in English," For Roman Jakobson
('s-Gravenhage, 1956), 65-80.

Bar-Hillel has suggested in "Logical syntax and semantics," Language
30.230-7 (1954) that Pike's proposals can be formalized without the circularity






Many problems of morphemic analysis also receive quite simple
solutions if we adopt the general framework outlined above. In
attempting to develop discovery procedures for grammars we are
naturally led to consider morphemes as classes of sequences of
phonemes, i.e., as having actual phonemic 'content' in an almost
literal sense. This leads to trouble in such well-known cases as
English "took" /tuk/, where it is difficult without artificiality to
associate any part of this word with the past tense morpheme which
appears as /t/ in "walked" /wokt/, as /d/ in "framed" /freymd/, etc.
We can avoid all such problems by regarding morphology and
phonology as two distinct but interdependent levels of representation,
related in the grammar by morphophonemic rules such as
(19). Thus "took" is represented on the morphological level as
take + past just as "walked" is represented as walk + past. The
morphophonemic rules (19ii), (19v), respectively, carry these strings
of morphemes into /tuk/, /wokt/. The only difference between the
two cases is that (19v) is a much more general rule than (19ii).8 If we
give up the idea that higher levels are literally constructed out of
that many sense in them by the use of recursive definitions. He does not pursue
this suggestion in any detail, and my own feeling is that success along these lines
is unlikely. Moreover, if we are satisfied with an evaluation procedure for
grammars, we can construct interdependent levels with only direct definitions, as
we have just seen.

The problem of interdependence of phonemic and morphemic levels must not
be confused with the question of whether morphological information is required
to read a phonemic transcription. Even if morphological considerations are
considered relevant to determining the phonemes of a language, it may still be
the case that the phonemic transcription provides complete 'reading' rules with
no reference to other levels. Cf. N. Chomsky, M. Halle, F. Lukoff, "On accent
and juncture in English," For Roman Jakobson ('s-Gravenhage, 1956), 65-80,
for discussion and examples.

8 Hockett gives a very clear presentation of this approach to levels in
A manual of phonology (1955), p. 15. In "Two models of grammatical description,"
Linguistics Today, Word 10.210-33 (1954), Hockett rejected a solution
very much like the one we have just proposed on the grounds that "took and take
are partly similar in phonemic shape just as are baked and bake, and similar in
meaning also in the same way; this fact should not be obscured" (p. 224). But
the similarity in meaning is not obscured in our formulation, since the morpheme
past appears in the morphemic representation of both "took" and
"baked." And the similarity in phonemic shape can be brought out in the actual






lower level elements, as I think we must, then it becomes much more
natural to consider even such abstract systems of representation as
transformational structure (where each utterance is represented by
the sequence of transformations by which it is derived from a
terminal string of the phrase structure grammar) as constituting
a linguistic level.

We are not actually forced to give up hope of finding a practical
discovery procedure by adopting either the view that levels are
interdependent, or the conception of linguistic levels as abstract
systems of representation related only by general rules. Nevertheless,
I think it is unquestionable that opposition to mixing levels,
as well as the idea that each level is literally constructed out of lower
level elements, has its origin in the attempt to develop a discovery
procedure for grammars. If we renounce this goal and if we
distinguish clearly between a manual of suggestive and helpful
procedures and a theory of linguistic structure, then there is little
reason for maintaining either of these rather dubious positions.

There are many other commonly held views that seem to lose
much of their appeal if we formulate our goals in the manner
suggested above. Thus it is sometimes argued that work on syntactic
theory is premature at this time in view of the fact that many
of the problems that arise on the lower levels of phonemics and
morphology are unsolved. It is quite true that the higher levels of
linguistic description depend on results obtained at the lower levels.
But there is also a good sense in which the converse is true. For
example, we have seen above that it would be absurd, or even
hopeless, to state principles of sentence construction in terms of
phonemes or morphemes, but only the development of such higher
levels as phrase structure indicates that this futile task need not be

formulation of the morphophonemic rule that carries take + past into /tuk/.
We will no doubt formulate this rule as

ey → u in the context t — k + past

in the actual morphophonemic statement. This will allow us to simplify the
grammar by a generalization that will bring out the parallel between "take"-
"took," "shake"-"shook," "forsake"-"forsook," and, more generally, "stand"-
"stood," etc.

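The rule in this footnote can be simulated mechanically. In the sketch below the spelling "teyk + past" for the morphemic string, and the deletion of the affix once the vowel change has applied, are assumptions made purely for illustration:

    import re

    def take_to_took(morphemes):
        # ey -> u in the context t --- k + past
        changed = re.sub(r"(?<=t)ey(?=k \+ past)", "u", morphemes)
        return changed.replace("k + past", "k")   # affix assumed absorbed

    print(take_to_took("teyk + past"))            # -> "tuk", i.e. /tuk/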





undertaken on lower levels. 9 Similarly, we have argued that des- 
cription of sentence structure by constituent analysis will be unsuc- 
cessful, if pushed beyond certain limits. But only the development 
of the still more abstract level of transformations can prepare the 
way for the development of a simpler and more adequate technique 
of constituent analysis with narrower limits. The grammar of a 
language is a complex system with many and varied interconnections 
between its parts. In order to develop one part of grammar 
thoroughly, it is often useful, or even necessary, to have some 
picture of the character of a completed system. Once again, I think 
that the notion that syntactic theory must await the solution of 
problems of phonology and morphology is completely untenable 
whether or not one is concerned with the problem of discovery 
procedures, but I think it has been nurtured by a faulty analogy 
between the order of development of linguistic theory and the 
presumed order of operations in discovery of grammatical structure. 



" See N. Chomsky, M. Halle, F. LukofT, "On accent and juncture in Eng- 
J.sh," For Roman Jakohwn i 's-Gravenhage, 1 956 1, 65 -80, f< >r a discussion of the 
possibility that considerations on all h.gher levels, including morphology, 
phrase structure, and transformations, arc relevant to the selection of a phone- 
mic analysis. 



7 



SOME TRANSFORMATIONS IN ENGLISH 



7.1 After this digression, we can return to the investigation of the
consequences of adopting the transformational approach in the
description of English syntax. Our goal is to limit the kernel in such
a way that the terminal strings underlying the kernel sentences are
derived by a simple system of phrase structure and can provide the
basis from which all sentences can be derived by simple
transformations: obligatory transformations in the case of the kernel,
obligatory and optional transformations in the case of non-kernel
sentences.

To specify a transformation explicitly we must describe the
analysis of the strings to which it applies and the structural change
that it effects on these strings.1 Thus, the passive transformation
applies to strings of the form NP - Aux - V - NP and has the effect
of interchanging the two noun phrases, adding by before the final
noun phrase, and adding be + en to Aux (cf. (34)). Consider now
the introduction of not or n't into the auxiliary verb phrase. The
simplest way to describe negation is by means of a transformation
which applies before (29ii) and introduces not or n't after the second
morpheme of the phrase given by (28iii) if this phrase contains at
least two morphemes, or after the first morpheme of this phrase if
it contains only one. Thus this transformation Tnot operates on
strings that are analyzed into three segments in one of the following
ways:

(37) (i) NP - C - V ...

1 For a more detailed discussion of the specification of transformations in
general and of specific transformations, see the references cited in footnote 8,
p. 44.






(ii) NP - C + M - ...
(iii) NP - C + have - ...
(iv) NP - C + be - ...

where the symbols are as in (28), (29), and it is immaterial what
stands in place of the dots. Given a string analyzed into three
segments in one of these ways, Tnot adds not (or n't) after the second
segment of the string. For example, applied to the terminal string
they - 0 + can - come (an instance of (37ii)), Tnot gives they -
0 + can + n't - come (ultimately, "they can't come"); applied to
they - 0 + have - en + come (an instance of (37iii)), it gives they -
0 + have + n't - en + come (ultimately, "they haven't come"); applied
to they - 0 + be - ing + come (an instance of (37iv)), it gives
they - 0 + be + n't - ing + come (ultimately, "they aren't coming").
The rule thus works properly when we select the last three cases of (37).

Suppose, now, that we select an instance of (37i), i.e., a terminal
string such as

(38) John - S - come,

which would give the kernel sentence "John comes" by (29ii).
Applied to (38), Tnot yields

(39) John - S + n't - come.

But we specified that Tnot applies before (29ii), which has the effect
of rewriting Af + v as v + Af #. However, we see that (29ii) does not
apply at all to (39), since (39) does not now contain a sequence
Af + v. Let us now add to the grammar the following obligatory
transformational rule which applies after (29):

(40) # Af → # do + Af

where do is the same element as the main verb in "John does his
homework". (Cf. (29iii) for introduction of #.) What (40) states is
that do is introduced as the 'bearer' of an unaffixed affix. Applying
(40) and morphological rules to (39) we derive "John doesn't come."
The rules (37) and (40) now enable us to derive all and only the
grammatical forms of sentence negation.
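Tnot and rule (40) amount to two small operations on the three-segment analysis of (37). A minimal sketch (the list representation is an assumption of the sketch, as before):

    def t_not(segments):                 # segments: the analysis of (37)
        first, second, rest = segments
        return [first, second + " + n't", rest]

    def do_support(affix):               # rule (40): # Af -> # do + Af
        return "do + " + affix           # do bears the stranded affix

    print(t_not(["John", "S", "come"]))  # (39): John - S + n't - come
    print(do_support("S + n't"))         # -> "do + S + n't", i.e. "doesn't"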

As it stands, the transformational treatment of negation is some- 
what simpler than any alternative treatment within phrase structure. 






The advantage of the transformational treatment (over inclusion of
negatives in the kernel) would become much clearer if we could find
other cases in which the same formulations (i.e., (37) and (40)) are
required for independent reasons. But in fact there are such cases.

Consider the class of 'yes-or-no' questions such as "have they
arrived", "can they arrive", "did they arrive". We can generate all
(and only) these sentences by means of a transformation Tq that
operates on strings with the analysis (37), and has the effect of
interchanging the first and second segments of these strings, as these
segments are defined in (37). We require that Tq apply after (29i)
and before (29ii). Applied to

(41) (i) they - 0 - arrive
(ii) they - 0 + can - arrive
(iii) they - 0 + have - en + arrive
(iv) they - 0 + be - ing + arrive

which are of the forms (37i-iv), Tq yields the strings

(42) (i) 0 - they - arrive
(ii) 0 + can - they - arrive
(iii) 0 + have - they - en + arrive
(iv) 0 + be - they - ing + arrive.

Applying to these the obligatory rules (29ii, iii) and (40), and then
the morphophonemic rules, we derive

(43) (i) do they arrive
(ii) can they arrive
(iii) have they arrived
(iv) are they arriving

in phonemic transcription. Had we applied the obligatory rules
directly to (41), with no intervening Tq, we would have derived the
sentences

(44) (i) they arrive
(ii) they can arrive
(iii) they have arrived
(iv) they are arriving.

Thus (43i-iv) are the interrogative counterparts to (44i-iv).
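Tq itself is then nothing more than an interchange of the first two segments; in the same assumed representation as the earlier sketch:

    def t_q(segments):
        first, second, rest = segments   # segments as defined in (37)
        return [second, first, rest]

    print(t_q(["they", "0 + can", "arrive"]))
    # (41ii) -> (42ii): ['0 + can', 'they', 'arrive']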






In the case of (42i), do is introduced by rule (40) as the bearer of
the unaffixed element 0. If C had been developed into S or past by
rule (29i), rule (40) would have introduced do as a bearer of these
elements, and we would have such sentences as "does he arrive,"
"did he arrive." Note that no new morphophonemic rules are
needed to account for the fact that do + 0 → /duw/, do + S → /dəz/,
do + past → /did/; we need these rules anyway to account for the
forms of do as a main verb. Notice also that Tq must apply after
(29i), or number will not be assigned correctly in questions.

In analyzing the auxiliary verb phrase in rules (28), (29), we
considered S to be the morpheme of the third person singular and
0 to be the morpheme affixed to the verb for all other forms of the
subject. Thus the verb has S if the noun subject has 0 ("the boy
arrives") and the verb has 0 if the subject has S ("the boys arrive").
An alternative that we did not consider was to eliminate the zero
morpheme and to state simply that no affix occurs if the subject is
not third person singular. We see now that this alternative is not
acceptable. We must have the 0 morpheme or there will be no
affix in (42i) for do to bear, and rule (40) will thus not apply to (42i).
There are many other cases where transformational analysis provides
compelling reasons for or against the establishment of zero
morphemes. As a negative case, consider the suggestion that intransitive
verbs be analyzed as verbs with zero object. But then the
passive transformation (34) would convert, e.g., "John - slept - 0"
into the non-sentence "0 - was slept - by + John" → "was slept by
John." Hence this analysis of intransitives must be rejected. We
return to the more general problem of the role of transformations
in determining constituent structure in § 7.6.

The crucial fact about the question transformation Tq is that
almost nothing must be added to the grammar in order to describe
it. Since both the subdivision of the sentence that it imposes and
the rule for appearance of do were required independently for
negation, we need only describe the inversion effected by Tq in
extending the grammar to account for yes-or-no questions. Putting
it differently, transformational analysis brings out the fact that
negatives and interrogatives have fundamentally the same 'struc-






ture', and it can make use of this fact to simplify the description
of English syntax.

In treating the auxiliary verb phrase we left out of consideration
forms with the heavy stressed element do as in "John does come,"
etc. Suppose we set up a morpheme A of contrastive stress to which
the following morphophonemic rule applies:

(45) ...V... + A → ...V́..., where V́ indicates extra heavy stress.

We now set up a transformation TA that imposes the same
structural analysis of strings as does Tnot (i.e., (37)), and adds A to
these strings in exactly the position where Tnot adds not or n't.
Then just as Tnot yields such sentences as

(46) (i) John doesn't arrive (from John # S + n't # arrive, by (40))
(ii) John can't arrive (from John # S + can + n't # arrive)
(iii) John hasn't arrived (from John # S + have + n't # en + arrive)

TA yields the corresponding sentences

(47) (i) John does arrive (from John # S + A # arrive, by (40))
(ii) John can arrive (from John # S + can + A # arrive)
(iii) John has arrived (from John # S + have + A # en + arrive).

Thus TA is a transformation of 'affirmation' which affirms the
sentences "John arrives", "John can arrive", "John has arrived,"
etc., in exactly the same way as Tnot negates them. This is formally
the simplest solution, and it seems intuitively correct as well.

There are still other instances of transformations that are
determined by the same fundamental syntactic analysis of sentences,
namely (37). Consider the transformation Tso that converts the
pairs of strings of (48) into the corresponding strings of (49):

(48) (i) John - S - arrive, I - 0 - arrive
(ii) John - S + can - arrive, I - 0 + can - arrive
(iii) John - S + have - en + arrive, I - 0 + have - en + arrive

(49) (i) John - S - arrive - and - so - 0 - I
(ii) John - S + can - arrive - and - so - 0 + can - I
(iii) John - S + have - en + arrive - and - so - 0 + have - I.

Applying rules (29ii, iii), (40), and the morphophonemic rules, we
ultimately derive






(50) (i) John arrives and so do I
(ii) John can arrive and so can I
(iii) John has arrived and so have I.

Tso operates on the second sentence in each pair in (48), first
replacing the third segment of this sentence by so, and then
interchanging the first and third segment. (The element so is thus a
pro-VP, in much the same sense in which he is a pronoun.) The
transformation Tso combines with the conjunction transformation
to give (49). While we have not described this in anywhere near
sufficient detail, it is clear that both the analysis (37) of sentences
and the rule (40) again are fundamental. Thus almost nothing new
is required in the grammar to incorporate such sentences as (50),
which are formed on the same underlying transformational pattern
as negatives, questions, and emphatic affirmatives.
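Tso, too, reduces to a pair of elementary steps in the assumed representation: the third segment is replaced by so, and the first and third segments are then interchanged:

    def t_so(segments):
        first, second, third = segments
        return ["so", second, first]     # third replaced by so, then
                                         # first and third interchanged

    print(t_so(["I", "0", "arrive"]))    # -> ['so', '0', 'I'], cf. (49i)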

There is another remarkable indication of the fundamental
character of this analysis that deserves mention here. Consider the
kernel sentences

(51) (i) John has a chance to live
(ii) John is my friend.

The terminal strings that underlie (51) are

(52) (i) John + C + have + a + chance + to + live
(ii) John + C + be + my + friend

where have in (52i) and be in (52ii) are main verbs, not auxiliaries.
Consider now how the transformations Tnot, Tq and Tso apply to
these underlying strings. Tnot applies to any string of the form (37),
adding not or n't between the second and the third segments, as
given in (37). But (52i) is, in fact, an instance of both (37i) and
(37iii). Hence Tnot applied to (52i) will give either (53i) or (53ii):

(53) (i) John - C + n't - have + a + chance + to + live
(→ "John doesn't have a chance to live")
(ii) John - C + have + n't - a + chance + to + live
(→ "John hasn't a chance to live").

But in fact both forms of (53) are grammatical. Furthermore have is
the only transitive verb for which this ambiguous negation is






possible, just as it is the only transitive verb that can be ambiguously
analyzed in terms of (37). That is, we have "John doesn't read
books" but not "John readsn't books".

Similarly, Tq applied to (52i) will give either form of (54), and Tso
will give either form of (55), since these transformations are also
based on the structural analysis (37).

(54) (i) does John have a chance to live?
(ii) has John a chance to live?

(55) (i) Bill has a chance to live and so does John.
(ii) Bill has a chance to live and so has John.

But in the case of all other transitive verbs such forms as (54ii),
(55ii) are impossible. We do not have "reads John books?" or
"Bill reads books and so reads John". We see, however, that the
apparently irregular behavior of "have" is actually an automatic
consequence of our rules. This solves the problem raised in § 2.3
concerning the grammaticalness of (3) but not (5).

Now consider (52ii). We have not shown this, but it is in fact
true that in the simplest phrase structure grammar of English there
is never any reason for incorporating "be" into the class of verbs;
i.e., it will not follow from this grammar that be is a V. Just as one
of the forms of the verb phrase is V + NP, one of the forms is
be + Predicate. Hence, even though be is not an auxiliary in (52ii),
it is nevertheless the case that of the analyses permitted by (37), only
(37iv) holds of (52ii). Therefore the transformations Tnot, Tq, and
Tso, applied to (52ii), yield, respectively (along with (29i)),

(56) (i) John - S + be + n't - my + friend (→ "John isn't my friend")
(ii) S + be - John - my + friend (→ "is John my friend")
(iii) Bill - S + be - my + friend - and - so - S + be - John
(→ "Bill is my friend and so is John").

Again, the analogous forms (e.g., "John readsn't books," etc.) are
impossible with actual verbs. Similarly, TA gives "John is here"
instead of "John does be here", as would be the case with actual
verbs.

If we were to attempt to describe English syntax wholly in terms






of phrase structure, the forms with "be" and "have" would appear
as glaring and distinct exceptions. But we have just seen that
exactly these apparently exceptional forms result automatically
from the simplest grammar constructed to account for the regular
cases. Hence, this behavior of "be" and "have" actually turns out
to be an instance of a deeper underlying regularity when we consider
English structure from the point of view of transformational analysis.

Notice that the occurrence of have as an auxiliary in such terminal
strings as John + C + have + en + arrive (underlying the kernel
sentence "John has arrived") is not subject to the same ambiguous
analysis. This terminal string is an instance of (37iii), but not of
(37i). That is, it can be analyzed as in (57i), but not (57ii).

(57) (i) John - C + have - en + arrive (NP - C + have - ..., i.e.,
(37iii))
(ii) John - C - have + en + arrive (NP - C - V..., i.e., (37i))

This string is not an instance of (37i) since this occurrence of have is
not a V, even though certain other occurrences of have (e.g., in
(52i)) are V's. The phrase structure of a terminal string is determined
from its derivation, by tracing segments back to node points in
the manner described in § 4.1. But have in (57) is not traceable to
any node point labelled V in the derivation of this string. (52i) is
ambiguously analyzable, however, since the occurrence of have in
(52i) is traceable back to a V, and of course, is traceable back to a
have (namely, itself), in the diagram corresponding to the derivation
of the string (52i). The fact that (57ii) is not a permissible analysis
prevents us from deriving such non-sentences as "John doesn't have
arrived", "does John have arrived", etc.

In this section we have seen that a wide variety of apparently 
distinct phenomena all fall into place in a very simple and natural 
way when we adopt the viewpoint of transformational analysis and 
that, consequently, the grammar of English becomes much more 
simple and orderly. This is the basic requirement that any concep- 
tion of linguistic structure (i.e., any model for the form of grammars) 
must meet. I think that these considerations give ample justifi- 
cation for our earlier contention that the conceptions of phrase 






structure are fundamentally inadequate and that the theory of 
linguistic structure must be elaborated along the lines suggested in 
this discussion of transformational analysis. 

7.2 We can easily extend the analysis of questions given above to
include such interrogatives as

(58) (i) what did John eat
(ii) who ate an apple

which do not receive yes-or-no answers. The simplest way to
incorporate this class of sentences into the grammar is by setting up
a new optional transformation Tw which operates on any string of
the form

(59) X - NP - Y

where X and Y stand for any string (including, in particular, the
'null' string, i.e., the first or third position may be empty). Tw
then operates in two steps:

(60) (i) Tw1 converts the string of the form X - NP - Y into the
corresponding string of the form NP - X - Y; i.e., it
inverts the first and second segments of (59). It thus has
the same transformational effect as Tq (cf. (41)-(42)).
(ii) Tw2 converts the resulting string NP - X - Y into
who - X - Y if NP is an animate NP or into what - X - Y
if NP is inanimate.2

We now require that Tw can apply only to strings to which Tq has
already applied. We specified that Tq must apply after (29i) and
before (29ii). Tw applies after Tq and before (29ii), and it is
conditional upon Tq in the sense that it can only apply to forms given
by Tq. This conditional dependence among transformations is a
generalization of the distinction between obligatory and optional
transformations which we can easily build into the grammar, and

2 More simply, we can limit application of Tw to strings X - NP - Y where
NP is he, him, or it, and we can define Tw2 as the transformation that converts
any string Z into wh + Z, where wh is a morpheme. In the morphophonemics of
English we shall have rules: wh + he → /huw/, wh + him → /huwm/, wh + it →
/wat/.






which proves essential. The terminal string underlying both (58 i) 
and (58 d) (as well as (62), (64)) is 

(61) John -C-eat + an + apple (NP - C - V...), 

where the dashes indicate the analysis imposed by T_q. Thus (61) is
a case of (37i), as indicated. If we were to apply only obligatory
transformations to (61), choosing past in developing C by (29i), we
would derive

(62) # John # eat + past # an # apple #   (→ "John ate an apple")

If we apply (29i) and T_q to (61), we derive

(63) past - John - eat + an + apple,

where C is taken as past. If we were now to apply (40) to (63), 
introducing do as the bearer of past, we would have the simple 
interrogative 

(64) did John eat an apple 

If we apply T_w to (63), however, we derive first (65), by T_w1, and
then (66), by T_w2.

(65) John - past - eat + an + apple

(66) who - past - eat + an + apple.

Rule (29ii) and the morphophonemic rules then convert (66) into
(58ii). To form (58ii), then, we apply first T_q and then T_w to the
terminal string (61) that underlies the kernel sentence (62). Note
that in this case T_w1 simply undoes the effect of T_q, which explains
the absence of inversion in (58ii).
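
The mechanics of this derivation can be made concrete with a minimal
sketch in Python, treating a string under analysis as a list of segments.
The sketch is only an expository device: the three-segment lists, the toy
animacy lexicon and the function names are assumptions of the illustration,
not constructs of the grammar.

    ANIMATE = {"John"}  # toy lexicon: NPs counted as animate

    def t_q(segments):
        # Invert the first and second segments (NP - C - ... -> C - NP - ...).
        first, second, *rest = segments
        return [second, first] + rest

    def t_w1(segments):
        # X - NP - Y -> NP - X - Y: structurally the same inversion as T_q.
        return t_q(segments)

    def t_w2(segments):
        # Replace the fronted NP by who/what according to animacy (cf. (60ii)).
        np, *rest = segments
        return (["who"] if np in ANIMATE else ["what"]) + rest

    s = ["John", "past", "eat + an + apple"]  # (61), with C developed as past
    s = t_q(s)                                # (63) past - John - eat + an + apple
    s = t_w1(s)                               # (65) John - past - eat + an + apple
    s = t_w2(s)                               # (66) who - past - eat + an + apple
    print(" - ".join(s))                      # who - past - eat + an + apple

Rule (29ii) and the morphophonemic rules would then spell (66) out as
"who ate an apple", exactly as in the derivation just given.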

To apply T_w to a string, we first select a noun phrase and then
invert this noun phrase with the string that precedes it. In forming
(58ii), we applied T_w to (63), choosing the noun phrase John.
Suppose now that we apply T_w to (63), choosing the noun phrase
an + apple. Thus for the purposes of this transformation we now
analyze (63) as

(67) past + John + eat - an + apple,

a string of the form (59), where Y in this case is null. Applying T_w
to (67) we derive first (68), by T_w1, and then (69), by T_w2.

(68) an + apple - past + John + eat






(69) what - past + John + eat.

(29ii) does not now apply to (69), just as it did not apply to (39) or
(42ii), since (69) does not contain a substring of the form Af + v.
Hence (40) applies to (69), introducing do as a bearer of the
morpheme past. Applying the remaining rules, we finally derive
(58i).

T_w as formulated in (59)-(60) will also account for all such wh-
questions as "what will he eat", "what has he been eating". It can
easily be extended to cover interrogatives like "what book did he
read", etc.

Notice that T_w1 as defined in (60i) carries out the same trans-
formation as does T_q; that is, it inverts the first two segments of the
string to which it applies. We have not discussed the effect of
transformations on intonation. Suppose that we set up two fun-
damental sentence intonations: falling intonations, which we asso-
ciate with kernel sentences, and rising intonations, which we asso-
ciate with yes-or-no questions. Then the effect of T_q is in part to
convert the intonation from one of these to the other; hence, in the
case of (64), to convert a falling intonation into a rising one. But we
have seen that T_w1 applies only after T_q, and that its transformation-
al effect is the same as that of T_q. Hence T_w1 will convert the rising
intonation back into a falling one. It seems reasonable to put this
forth as an explanation for the fact that the interrogatives (58i-ii)
normally have the falling intonation of declaratives. There are
many problems in extending our discussion to intonational pheno-
mena and this remark is too sketchy to carry much weight, but it
does suggest that such an extension may be fruitful.

To summarize, we see that the four sentences

(70) (i) John ate an apple (= (62))
(ii) did John eat an apple (= (64))
(iii) what did John eat (= (58i))
(iv) who ate an apple (= (58ii))

are all derived from the underlying terminal string (61). (70i) is a
kernel sentence, since only obligatory transformations enter into its
'transformational history.' (70ii) is formed from (61) by applying
T_q. (70iii) and (70iv) are even more remote from the kernel, since
they are formed from (61) by applying first T_q and then T_w. We
shall refer to this analysis briefly in § 8.2.
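
In the same illustrative spirit, these four transformational histories can be
tabulated as data; the dictionary below merely restates the preceding
paragraph and is an assumption of the sketch, not part of the grammar.

    histories = {
        "John ate an apple":     [],              # (70i): obligatory rules only
        "did John eat an apple": ["T_q"],         # (70ii)
        "what did John eat":     ["T_q", "T_w"],  # (70iii): T_w chose the object
        "who ate an apple":      ["T_q", "T_w"],  # (70iv): T_w chose the subject
    }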

7.3 In § 5.3 we mentioned that there are certain noun phrases of
the form to + VP, ing + VP ("to prove that theorem," "proving that
theorem"; cf. (32)-(33)). Among these we will have such phrases as
"to be cheated," "being cheated", which are derived from passives.
But passives have been deleted from the kernel. Hence noun
phrases of the type to + VP, ing + VP can no longer be introduced
within the kernel grammar by such rules as (33). They must
therefore be introduced by a 'nominalizing transformation' which
converts a sentence of the form NP - VP into a noun phrase of the
form to + VP or ing + VP.³ We shall not go into the structure of
this very interesting and ramified set of nominalizing transform-
ations except to sketch briefly a transformational explanation for a
problem raised in § 2.3.

One of the nominalizing transformations will be the transform-
ation T_Adj which operates on any string of the form

(71) T - N - is - Adj   (i.e., article - noun - is - adjective)

and converts it into the corresponding noun phrase of the form
T + Adj + N. Thus, it converts "the boy is tall" into "the tall boy,"
etc. It is not difficult to show that this transformation simplifies the
grammar considerably, and that it must go in this, not the opposite
direction. When we formulate this transformation properly, we
find that it enables us to drop all adjective-noun combinations from
the kernel, reintroducing them by T_Adj.
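
As a rough sketch of the same sort as before, T_Adj can be pictured as an
operation on a four-segment analysis; the Python rendering and its function
name are, again, assumptions of the illustration.

    def t_adj(segments):
        # T - N - is - Adj  ->  the noun phrase T + Adj + N (cf. (71)).
        t, n, copula, adj = segments
        assert copula == "is"
        return [t, adj, n]

    print(" + ".join(t_adj(["the", "boy", "is", "tall"])))  # the + tall + boy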

In the phrase structure grammar we have a rule

(72) Adj → old, tall, ...

³ This nominalizing transformation will be given as a generalized trans-
formation such as (26). It will operate on a pair of sentences, one of which it
converts from NP - VP into to + VP (or ing + VP), which it then substitutes for
an NP of the other sentence. See my The logical structure of linguistic theory and
Transformational analysis for a detailed discussion. For a fuller and more
adequate analysis of the material in this subsection, see my "A transformational
approach to syntax," Proceedings of the University of Texas Symposium of 1958
(to appear).




which lists all of the elements that can occur in the kernel sentences 
of the form (71). Words like "sleeping", however, will not be given 
in this list, even though we have such sentences as 

(73) the child is sleeping. 

The reason for this is that even when "sleeping" is not listed in (72),
(73) is generated by the transformation (29ii) (that carries Af + v
into v + Af) from the underlying terminal string

(74) the + child + C + be + ing - sleep,

where be + ing is part of the auxiliary verb (cf. (28iii)). Alongside of
(73), we have such sentences as "the child will sleep," "the child
sleeps," etc., with different choices for the auxiliary verb.

Such words as "interesting", however, will have to be given in the
list (72). In such sentences as

(75) the book is interesting, 

"interesting" is an Adj, not part of the Verb, as can be seen from the 
fact that we do not have "the book will interest," "the book 
interests," etc. 

An independent support for this analysis of "interesting" and 
"sleeping" comes from the behavior of "very," etc., which can occur 
with certain adjectives, but not others. The simplest way to account 
for "very" is to put into the phrase structure grammar the rule 

(76) Adj → very + Adj.

"ver>" can appear in (75), and in general with "interesting"; but it 
cannot appear in (73) or with other occurrences of "sleeping," 
Hence, if we wish to preserve the simplest analysis of "very," we 
must list "interesting" but not "sleeping" in (72) as an Adj. 

We have not discussed the manner in which transformations
impose constituent structure, although we have indicated that this
is necessary; in particular, so that transformations can be com-
pounded. One of the general conditions on derived constituent
structure will be the following:

(77) If X is a Z in the phrase structure grammar, and a string Y
formed by a transformation is of the same structural form
as X, then Y is also a Z.






In particular, even when passives are deleted from the kernel we will 
want to say that the by-phrase (as in "the food was eaten — by the 
man") is a prepositional phrase (PP) in the passive sentence. (77) 
permits this, since we know from the kernel grammar that by + NP 
is a PP. (77) is not stated with sufficient accuracy, but it can be 
elaborated as one of a set of conditions on derived constituent 
structure. 

But now consider (73). The word "sleeping" is formed by trans-
formation (i.e., (29ii)) and it is of the same form as "interesting"
(i.e., it is a V + ing), which, as we know from the phrase structure
grammar, is an Adj. Hence, by (77), "sleeping" is also an Adj in the
transform (73). But this means that (73) can be analyzed as a string
of the form (71) so that T_Adj applies to it, forming the noun phrase

(78) the sleeping child 

just as it forms "the interesting book" from (75). Thus even though 
"sleeping" is excluded from (72), it will appear as an adjective 
modifying nouns. 

This analysis of adjectives (which is all that we are required to
give to account for the actually occurring sentences) will not
introduce the word "sleeping," however, into all the adjective
positions of such words as "interesting" which remained in the
kernel. For example, it will never introduce "sleeping" into the
context "very ---". Since "very" never modifies verbs, "very" will
not appear in (74) or (73), and all occurrences of "sleeping" as a
modifier are derived from its occurrence as a verb in (74), etc.
Similarly, there will be phrase structure rules that analyze the verb 
phrase into 

(79) Aux + seem + Adj 

just as other rules analyze VP into Aux + V + NP, Aux + be + Adj,
etc. But "sleeping" will never be introduced into the context
"seems ---" by this grammar, which is apparently the simplest one
constructible for the actually occurring sentences.

When we develop this sketchy argument more carefully, we reach 
the conclusion that the simplest transformational grammar for the 
occurring sentences will exclude (80) while generating (81). 






(80) (i) the child seems sleeping 
(ii) the very sleeping child 

(81) (i) the book seems interesting 
(ii) the very interesting book. 

We see, then, that the apparently arbitrary distinctions noted in
§ 2.3 between (3) (= "have you a book on modern music?") and
(4) (= (81i)) on the one hand, and (5) (= "read you a book on
modern music?") and (6) (= (80i)) on the other, have a clear struc-
tural origin, and are really instances of higher level regularity in the
sense that they are consequences of the simplest transformational
grammar. In other words, certain linguistic behavior that seems
unmotivated and inexplicable in terms of phrase structure appears
simple and systematic when we adopt the transformational point of
view. To use the terminology of § 2.2, if a speaker were to project
his finite linguistic experience by using phrase structure and trans-
formations in the simplest possible way, consistent with his ex-
perience, he would include (3) and (4) as grammatical while rejecting
(5) and (6).

7.4 In (28), § 5.3, we analyzed the element Verb into Aux + V, and
then simply listed the verbal roots of the class V. There are, how-
ever, a large number of productive subconstructions of V that
deserve some mention, since they bring to light some basic points
in a rather clear way. Consider first such verb + particle (V + Prt)
constructions as "bring in," "call up," "drive away." We can have
such forms as (82) but not (83).

(82) (i) the police brought in the criminal 
(ii) the police brought the criminal in 
(iii) the police brought him in

(83) the police brought in him. 

We know that discontinuous elements cannot be handled readily
within the phrase structure grammar. Hence the most natural way
of analyzing these constructions is to add to (28ii) the following
possibility:

(84) V → V1 + Prt






along with a set of supplementary rules to indicate which V1 can go
with which Prt. To allow for the possibility of (82ii) we set up an
optional transformation T_sep^op which operates on strings with the
structural analysis

(85) X - V1 - Prt - NP

and has the effect of interchanging the third and fourth segments of
the string to which it applies. It thus carries (82i) into (82ii). To
provide for (82iii) while excluding (83), we must indicate that this
transformation is obligatory when the NP object is a pronoun
(Pron). Equivalently, we can set up an obligatory transformation
T_sep^ob which has the same structural effect as T_sep^op but which
operates on strings with the structural analysis

(86) X - V1 - Prt - Pron

We know that the passive transformation operates on any string of
the form NP - Verb - NP. If we specify that the passive transfor-
mation applies before T_sep^op or T_sep^ob, then it will form the passives

(87) (i) the criminal was brought in by the police
(ii) he was brought in by the police

from (82), as it should.
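
The interplay of the optional and obligatory versions can be sketched in
the same illustrative Python; the pronoun list and the single function
combining T_sep^op and T_sep^ob are simplifying assumptions of the sketch.

    PRONOUNS = {"him", "her", "it", "them", "me", "us", "you"}

    def t_sep(x, v1, prt, np, apply_optional=False):
        # X - V1 - Prt - NP (cf. (85)): interchange the 3rd and 4th segments.
        # Obligatory when NP is a pronoun (cf. (86)); otherwise optional.
        if np in PRONOUNS or apply_optional:
            return [x, v1, np, prt]
        return [x, v1, prt, np]

    print(t_sep("the police", "brought", "in", "the criminal"))
    # (82i)   ['the police', 'brought', 'in', 'the criminal']
    print(t_sep("the police", "brought", "in", "the criminal", apply_optional=True))
    # (82ii)  ['the police', 'brought', 'the criminal', 'in']
    print(t_sep("the police", "brought", "in", "him"))
    # (82iii) ['the police', 'brought', 'him', 'in']; (83) is never produced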

Further investigation of the verb phrase shows that there is a
general verb + complement (V + Comp) construction that behaves
very much like the verb + particle construction just discussed.
Consider the sentences

(88) everyone in the lab considers John incompetent 

(89) John is considered incompetent by everyone in the lab. 

If we wish to derive (89) from (88) by the passive transformation we
must analyze (88) into the structure NP1 - Verb - NP2, where NP1 =
everyone + in + the + lab and NP2 = John. That is, we must apply the
passive not to (88), but to a terminal string (90) that underlies (88):

(90) everyone in the lab - considers incompetent - John.

We can now form (88) from (90) by a transformation analogous to
T_sep^op. Suppose that we add to the phrase structure grammar the rule
(91), alongside (84):

(91) V → Va + Comp






We now extend T_sep^ob, permitting it to apply to strings of the form
(92) as well as to strings of the form (86), as before.

(92) X - Va - Comp - NP.

This revised transformation T_sep^ob will convert (90) into (88). Thus,
the treatment of the verb + complement and verb + particle con-
structions is quite similar. The former, in particular, is an extre-
mely well-developed construction in English.⁴

7.5 We have barely sketched the justification for the particular 
form of each of the transformations that we have discussed, though 
it is very important to study the question of the uniqueness of this 
system. I think it can be shown that in each of the cases considered 
above, and in many other cases, there are very clear and easily 
generalizable considerations of simplicity that determine which set
of sentences belong to the kernel and what sorts of transformations 
are required to account for the non-kernel sentences. As a paradig- 
matic instance, we shall briefly review the status of the passive 
transformation. 

In § 5.4 we showed that the grammar is much more complex if it
contains both actives and passives in the kernel than if the passives
are deleted and reintroduced by a transformation that interchanges
the subject and object of the active, and replaces the verb V by
is + V + en + by. Two questions about uniqueness immediately
suggest themselves. First, we ask whether it is necessary to inter-

⁴ Further study shows that most of the verb + complement forms introduced
by rule (91) should themselves be excluded from the kernel and derived trans-
formationally from "John is incompetent," etc. But this is a complex matter
that requires a much more detailed development of transformational theory
than we can give here. Cf. my The logical structure of linguistic theory, Trans-
formational analysis and "A transformational approach to syntax".

There are several other features of these constructions that we have passed
over far too briefly. It is not at all clear that this is an obligatory transformation.
With long and complex objects we can have, e.g., "they consider incompetent
anyone who is unable to ..." Hence we might extend T_sep^op, rather than
T_sep^ob, to take care of this case. It is interesting to study those features of the
grammatical object that necessitate or preclude this transformation. Much more
than length is involved. There are also other possibilities for the passive that we
shall not consider here, for lack of space, though they make an interesting study.






change the noun phrases to form the passive. Second, we ask 
whether passives could have been chosen as the kernel, and actives 
derived from them by an 'active' transformation. 

Consider first the question of the interchange of subject and 
object. Is this interchange necessary, or could we describe the 
passive transformation as having the following effect: 

(93) NP1 - Aux - V - NP2 is rewritten NP1 - Aux + be + en -
V - by + NP2.

In particular, the passive of "John loves Mary" would be "John is 
loved by Mary." 

In § 5.4 we argued against (93) and in favor of inversion on the 
basis of the fact that we have such sentences as (94) but not (95). 

(94) (i) John admires sincerity - sincerity is admired by John
(ii) John plays golf - golf is played by John
(iii) sincerity frightens John - John is frightened by sincerity

(95) (i) sincerity admires John - John is admired by sincerity
(ii) golf plays John - John is played by golf
(iii) John frightens sincerity - sincerity is frightened by John.

We pointed out, however, that this approach requires that a notion
of "degree of grammaticalness" be developed to support this
distinction. I believe that this approach is correct, and that there is
a clear sense in which the sentences of (94) are more grammatical
than those of (95), which are themselves more grammatical than
"sincerity admires eat," etc. Any grammar that distinguishes
abstract from proper nouns would be subtle enough to characterize
the difference between (94i, iii) and (95i, iii), for example, and surely
linguistic theory must provide the means for this distinction. How-
ever, since we have not gone into the question of category analysis
in this discussion, it is interesting to show that there is even a
stronger argument against (93). In fact, any grammar that can
distinguish singular from plural is sufficiently powerful to enable us
to prove that the passive requires inversion of noun phrases.

To see this, consider the verb + complement construction dis- 
cussed in § 7.4. Alongside (88), (89) we have such sentences as:






(96) all the people in the lab consider John a fool 

(97) John is considered a fool by all the people in the lab. 

In § 7.4 we saw that (96) is formed by the transformation T_sep^ob from
the underlying string

(98) all the people in the lab - consider a fool - John   (NP - Verb -
NP),

with the Verb "consider a fool" being an instance of (91). We 
also saw that the passive transformation applies directly to (98). If 
the passive interchanges subject and object, it will correctly form 
(97) from (98) as the passive of (96). If, however, we take (93) as the
definition of the passive, we will derive the non-sentence

(99) all the people in the lab are considered a fool by John 

by application of this transformation to (98). 

The point is that we have found a verb, namely "consider a
fool", which must agree in number both with its subject and its
object.⁵ Such verbs prove quite conclusively that the passive must
be based on an inversion of subject and object.
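
The force of this argument can be checked mechanically. In the sketch
below (an expository device like the earlier ones, with the strings rendered
as tuples by assumption), the inverting passive and the rejected formulation
(93) are both applied to (98); only the first yields the string underlying (97).

    def passive_inverting(np1, aux, v, np2):
        # NP1 - Aux - V - NP2  ->  NP2 - Aux + be + en - V - by + NP1.
        return [np2, aux + " + be + en", v, "by + " + np1]

    def passive_93(np1, aux, v, np2):
        # The rejected (93): the noun phrases stay in place.
        return [np1, aux + " + be + en", v, "by + " + np2]

    s98 = ("all the people in the lab", "C", "consider a fool", "John")
    print(" - ".join(passive_inverting(*s98)))  # underlies (97)
    print(" - ".join(passive_93(*s98)))         # underlies the non-sentence (99)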

Consider now the question of whether passives could be taken as 
the kernel sentences instead of actives. It is quite easy to see that 
this proposal leads to a much more complex grammar. With actives 
as kernel sentences, the phrase structure grammar will include (28) 
with be + en dropped from (28iii). But if passives are taken as ker- 
nel sentences, be + en will have to be listed in (28iii), along with all
the other forms of the auxiliary, and we will have to add special rules 
indicating that if V is intransitive, it cannot have the auxiliary 
be + en (i.e., we cannot have "is occurred"), whereas if V is tran- 
sitive it must have be + en (i.e., we cannot have "lunch eats 
by John"). Comparing the two alternatives, there is no doubt as 
to relative complexity; and we are forced to take actives, not 
passives, as the kernel sentences. 

Notice that if passives were chosen as kernel sentences instead of 
actives we would run into certain difficulties of quite a different sort. 

⁵ The agreement between "a fool" and "John" in (98) is of course one
support for the further transformational analysis of the verb + complement +
noun phrase constructions mentioned in footnote 4 on p. 77.






The active transformation would have to apply to strings of the form

(100) NP1 - Aux + be + en - V - by + NP2,

converting them to NP2 - Aux - V - NP1. For example, it would
convert
convert 

(101) the wine was drunk by the guests 

into "the guests drank the wine," where "drunk" in (101) originates 
from en + drink. But there is also an adjective '"drunk" that must be 
listed in (72) along with "old," "interesting," etc , since we have 
"he is very drunk," "he seems drunk," etc. (cf. § 7. 3), and this 
adjective will also originate from en + drink. It thus appears that in 
the simplest system of phrase structure for English, the sentence 

(I02| John was drunk by midnight 

is also based on an underlying terminal string that can be analyzed 
in accordance with (100). In other words, there is no structural way 
to differentiate properly between (101) and (102), if both are taken 
as kernel sentences. But application of the 'active' transformation
to (102) does not give a grammatical sentence. 

When we actually try to set up, for English, the simplest grammar
that contains a phrase structure and transformational part, we find
that the kernel consists of simple, declarative, active sentences (in
fact, probably a finite number of these), and that all other sentences
can be described more simply as transforms. Each transformation
that I have investigated can be shown to be irreversible in the sense
that it is much easier to carry out the transformation in one direction
than in the other, just as in the case of the passive transformation
discussed above. This fact may account for the traditional practice
of grammarians, who customarily begin the grammar of English,
for example, with the study of simple 'actor-action' sentences and
simple grammatical relations such as subject-predicate or verb-
object. No one would seriously begin the study of English con-
stituent structure with such a sentence as "whom have they nomi-
nated," attempting to analyze it into two parts, etc., and while some
very detailed considerations of English structure (e.g., reference [33])
do not mention interrogatives, none fails to include simple declara-
tives. Transformational analysis provides a rather simple explana-
tion for this asymmetry (which is otherwise formally unmotivated)
on the assumption that grammarians have been acting on the basis
of a correct intuition about the language.⁶

7.6 One other point deserves some mention before we leave the 
topic of English transformations. At the outset of § 5 we noted that 
the rule for conjunction provides a useful criterion for constituent 
analysis in the sense that this rule is greatly simplified if constituents 
are set up in a certain way. Now we are interpreting this rule as a 
transformation. There are many other cases in which the behavior 
of a sentence under transformations provides valuable, even com- 
pelling evidence as to its constituent structure. 
Consider for example the pair of sentences 

(103) (i) John knew the boy studying in the library, 
(ii) John found the boy studying in the library. 

It is intuitively obvious that these sentences have different gram-
matical structure (this becomes clear, for example, when we attempt
to add "not running around in the streets" to (103)), but I do not
believe that within the level of phrase structure grounds can be
found for analyzing them into different constituents. The simplest
analysis in both cases is as NP - Verb - NP - ing + VP. But consider
the behavior of these sentences under the passive transformation.
We have the sentences (104) but not (105).⁷

⁶ In determining which of two related forms is more central, we are thus
following the reasoning outlined by Bloomfield for morphology: "... when forms
are partially similar, there may be a question as to which one we had better take
as the underlying form ... the structure of the language may decide this question
for us, since, taking it one way, we get an unduly complicated description, and
taking it the other way, a relatively simple one," (Language [New York, 1933],
p. 218). Bloomfield continues by pointing out that "this same consideration
often leads us to set up an artificial underlying form." We have also found this
insight useful in transformational analysis, as, e.g., when we set up the terminal
string John - C + have + en + be + ing - read underlying the kernel sentence
"John has been reading."

⁷ The sentences of (104) without the parenthesized expression are formed by
a second 'elliptical' transformation that converts, e.g., "the boy was seen by
John" into "the boy was seen."






(104) (i) the boy studying in the library was known (by John) 
(ii) the boy studying in the library was found (by John)
(iii) the boy was found studying in the library (by John) 

(105) the boy was known studying in the library (by John) 

The passive transformation applies only to sentences of the form
NP - Verb - NP. Hence, to yield (104ii), (103ii) must be analyz-
able as

(106) John - found - the boy studying in the library,

with the noun phrase object "the boy studying in the library."
(103i) will have a corresponding analysis, since we have the passive
(104i).

But (103ii) also has the passive (104iii). From this we learn that
(103ii) is a case of the verb + complement construction studied in
§ 7.4; i.e., that it is derived by the transformation T_sep^ob from the
underlying string

(107) John - found studying in the library - the boy,

with the verb "found" and the complement "studying in the
library." The passive transformation will convert (107) into (104iii),
just as it converts (90) into (89). (103i), however, is not a transform
of the string "John - knew studying in the library - the boy" (the
same form as (107)), since (105) is not a grammatical sentence.

By studying the grammatical passives, then, we determine that
"John found the boy studying in the library" (= (103ii)) is analyzable
ambiguously as either NP - Verb - NP, with the object "the boy
studying in the library," or as NP - Aux + V - NP - Comp, a
transform of the string (107), which has the complex Verb "found
studying in the library." "John knew the boy studying in the
library" (= (103i)), however, has only the first of these analyses.
The resulting description of (103) seems quite in accord with
intuition.

As another example of a similar type, consider the sentence 

(108) John came home. 






Although "John" and "home" are NP\ and "came" is a Verb, 
investigation of the effect of transformations on (108) shows that it 
cannot be analyzed as a case of NP — Verb — NP. We cannot have 
"home was tome by John" under ihe passive transformation, or 
"what did John come'* under the question transformation T w . We 
must therefore analyze (108) in some other way (if we are not to 
complicate unduly the description of these transformations), 
perhaps as NP — Verb — Adverb. Apart from such considerations 
as these, there do not appear to be very strong reasons for denying 
to 008) the completely counterintuitive analysis NP - Verb — NP, 
with "home" the object of "came". 

I think it is fair to say that a significant number of the basic 
criteria for determining constituent structure are actually trans- 
formational. The general principle is this: if we have a transform- 
ation that simplifies the grammar and leads from sentences to sen- 
tences in a large number of cases (i.e., a transformation under which 
the set of grammatical sentences is very nearly closed), then we 
attempt to assign constituent structure to sentences in such a way 
that this transformation always leads to grammatical sentences, thus 
simplifying the grammar even further. 

The reader will perhaps have noted a certain circularity or even
apparent inconsistency in our approach. We define such trans-
formations as the passive in terms of particular phrase structure
analyses, and we then consider the behavior of sentences under
these transformations in determining how to assign phrase structure
to these sentences. In § 7.5 we used the fact that "John was drunk
by midnight" (= (102)) does not have a corresponding 'active' as an
argument against setting up a passive-to-active transformation. In
§ 7.6 we have used the fact that "John came home" (= (108)) does
not have a passive as an argument against assigning to it the con-
stituent structure NP - Verb - NP. However, if the argument is
traced carefully in each case it will be clear that there is no circularity
or inconsistency. In each case our sole concern has been to decrease
the complexity of the grammar, and we have tried to show that the
proposed analysis is clearly simpler than the rejected alternatives.
In some cases the grammar becomes simpler if we reject a certain
transformation; in some cases reassignment of constituent structure
is preferable. We have thus been following the course outlined in
§ 6. Making use of phrase structure and transformations, we are 
trying to construct a grammar of English that will be simpler than 
any proposed alternative; and we are giving no thought to the 
question of how one might actually arrive at this grammar in some 
mechanical way from an English corpus, no matter how extensive. 
Our weaker goal of evaluation instead of discovery eliminates any 
fear of vicious circularity in the cases discussed above. The intuitive 
correspondences and explanations of apparent irregularities seem 
to me to offer important evidence for the correctness of the approach 
we have been following. Cf. § 8. 



8


THE EXPLANATORY POWER OF LINGUISTIC THEORY



8.1 So far we have considered the linguist's task to be that of 
producing a device of some sort (called a grammar) for generating 
all and only the sentences of a language, which we have assumed 
were somehow given in advance. We have seen that this conception 
of the linguist's activities leads us naturally to describe languages in 
terms of a set of levels of representation, some of which are quite 
abstract and non-trivial. In particular, it leads us to establish phrase 
structure and transformational structure as distinct levels of 
representation for grammatical sentences. We shall now proceed to 
formulate the linguist's goals in quite different and independent 
terms which, however, lead to very similar notions of linguistic
structure. 

There are many facts about language and linguistic behavior that
require explanation beyond the fact that such and such a string
(which no one may ever have produced) is or is not a sentence. It is
reasonable to expect grammars to provide explanations for some
of these facts. For example, for many English speakers the phoneme
sequence /əneym/ can be understood ambiguously as either "a
name" or "an aim". If our grammar were a one-level system deal-
ing only with phonemes, we would have no explanation for this fact.
But when we develop the level of morphological representation, we
find that, for quite independent reasons, we are forced to set up
morphemes "a", "an", "aim" and "name", associated with the
phonemic shapes /ə/, /ən/, /eym/ and /neym/. Hence, as an auto-
matic consequence of the attempt to set up the morphology in the
simplest possible way we find that the phoneme sequence /əneym/ is
ambiguously represented on the morphological level. In general,
we say that we have a case of constructional homonymity when a
certain phoneme sequence is analyzed in more than one way on
some level. This suggests a criterion of adequacy for grammars.
We can test the adequacy of a given grammar by asking whether or
not each case of constructional homonymity is a real case of am-
biguity and each case of the proper kind of ambiguity is actually a
case of constructional homonymity.¹ More generally, if a certain
conception of the form of grammar leads to a grammar of a given
language that fails this test, we may question the adequacy of this
conception and the linguistic theory that underlies it. Thus, a
perfectly good argument for the establishment of a level of mor-
phology is that this will account for the otherwise unexplained
ambiguity of /əneym/.
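
The 'automatic consequence' can be displayed with a toy segmenter: given
only the four morphemes and their phonemic shapes, exhaustive parsing finds
two analyses for /əneym/. The segmenter is a sketch of my own, and the
spellings of the phonemic shapes are assumptions of the sketch.

    MORPHEMES = {"a": "ə", "an": "ən", "aim": "eym", "name": "neym"}

    def parses(phonemes, lexicon=MORPHEMES):
        # All ways of analyzing a phoneme string as a morpheme sequence.
        if not phonemes:
            return [[]]
        results = []
        for morpheme, shape in lexicon.items():
            if phonemes.startswith(shape):
                for rest in parses(phonemes[len(shape):], lexicon):
                    results.append([morpheme] + rest)
        return results

    print(parses("əneym"))  # [['a', 'name'], ['an', 'aim']]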

We have a case of constructional homonymity when some pho- 
neme sequence is ambiguously represented. Suppose that on some 
level two distinct phoneme sequences are similarly or identically 
analyzed. We expect that these sequences should somehow be 
'understood* in a similar manner, just as cases of dual represent- 
ation are 'understood* in more than one way. For example, the 
sentences 

(109) (i) John played tennis 
(ii) my friend likes music 

are quite distinct on phonemic and morphemic levels. But on the
level of phrase structure they are both represented as NP - Verb - NP;
correspondingly, it is evident that in some sense they are similarly
understood. This fact could not be explained in terms of a gram-
mar that did not go beyond the level of words or morphemes, and
such instances offer a motivation for establishing the level of phrase
structure that is quite independent of that advanced in § 3. Note
that considerations of structural ambiguity can also be brought

¹ Obviously, not all kinds of ambiguity will be analyzable in syntactic terms.
For example, we would not expect a grammar to explain the referential ambi-
guity of "son"-"sun", "light" (in color, weight), etc.

In his "Two models of grammatical description," Linguistics Today, Word
10.210-34 (1954), Hockett uses notions of structural ambiguity to demonstrate
the independence of various linguistic notions in a manner very similar to what
we are suggesting here.




forth as a motivation for establishing a level of phrase structure. 
Such expressions as "old men and women" and "they are flying
planes" (i.e., "those specks on the horizon are ...", "my friends
are ...") are evidently ambiguous, and in fact they are ambiguously
analyzed on the level of phrase structure, though not on any lower
level. Recall that the analysis of an expression on the level of
phrase structure is provided not by a single string but by a diagram
such as (15) or, equivalently, by a certain set of representing strings.²
What we are suggesting is that the notion of "understanding a
sentence" be explained in part in terms of the notion of "linguistic
level". To understand a sentence, then, it is first necessary to
reconstruct its analysis on each linguistic level; and we can test the
adequacy of a given set of abstract linguistic levels by asking whether
or not grammars formulated in terms of these levels enable us to
provide a satisfactory analysis of the notion of "understanding."
Cases of higher level similarity of representation and higher level
dissimilarity (constructional homonymity) are simply the extreme
cases which, if this framework is accepted, prove the existence of
higher levels. In general, we cannot understand any sentence fully
unless we know at least how it is analyzed on all levels, including
such higher levels as phrase structure, and, as we shall see, trans-
formational structure.

We were able to show the inadequacy of a theory of linguistic
structure that stopped short of phrase structure by exhibiting cases
of ambiguity and similarity of understanding that were unexplained
on lower levels. But it turns out that there is still a large residue of un-
explained cases even after the level of phrase structure is established
and applied to English. Analysis of these cases demonstrates the

² That is, by what is called a "phrase marker" in my The logical structure of
linguistic theory and "Three models for the description of language" (above,
p. 22, fn. 1). See "Three models ..." for discussion of the constructional
homonymity of "they are flying planes" within a phrase structure grammar.
When we adjoin a transformational grammar to the phrase structure grammar,
this sentence is, however, an example of transformational ambiguity, not
constructional homonymity within phrase structure. In fact, it is not clear that
there are any cases of constructional homonymity purely within the level of
phrase structure once a transformational grammar is developed.






necessity for a still 'higher' level of transformational analysis in a
manner independent of §§ 5, 7. I shall mention only a few re-
presentative instances.

8.2 In § 7.6 we came across an example of a sentence (i.e., "I found
the boy studying in the library" (103ii)) whose ambiguity of
representation could not be demonstrated without bringing trans-
formational criteria to bear. We found that under one interpre-
tation this sentence was a transform under T_sep^ob of "I - found
studying in the library - the boy," and that under another inter-
pretation it was analyzed into an NP - Verb - NP construction with
the object "the boy studying in the library." Further transforma-
tional analysis would show that in both cases the sentence is a
transform of the pair of terminal strings that underlie the simple
kernel sentences

(110) (i) I found the boy
(ii) the boy is studying in the library.

Hence this is an interesting case of a sentence whose ambiguity is
the result of alternative transformational developments from the
same kernel strings. This is quite a complicated example, however,
requiring a fairly detailed study of the way in which transformations
assign constituent structure, and simpler examples of ambiguity
with a transformational origin are not hard to find.

Consider the phrase (111), which can be understood ambiguously
with "hunters" as the subject, analogously to (112i), or as the
object, analogously to (112ii).

(111) the shooting of the hunters

(112) (i) the growling of lions
(ii) the raising of flowers.

On the level of phrase structure there is no good way to explain this
ambiguity; all of these phrases are represented as the - V + ing -
of + NP.³ In transformational terms, however, there is a clear and

³ It is true that (111) may be represented ambiguously with shoot taken
either as a transitive or an intransitive verb, but the essential fact here is that the






automatic explanation. Careful analysis of English shows that we
can simplify the grammar if we strike such phrases as (111) and (112)
out of the kernel and reintroduce them by transformation. To
account for such phrases as (112i), we will set up a transformation
that carries any sentence of the form NP - C - V into the corre-
sponding phrase of the form the - V + ing - of + NP; and this
transformation will be designed in such a way that the result is an
NP.⁴ To account for (112ii), we will set up a transformation which
carries any sentence of the form NP1 - C - V - NP2 into the corre-
sponding NP of the form the - V + ing - of + NP2. Thus the first of
these transformations will carry "lions growl" into "the growling of
lions," and the second will carry "John raises flowers" into "the
raising of flowers." But both "the hunters shoot" and "they shoot
the hunters" are kernel sentences. Hence (111) "the shooting of
the hunters" will have two distinct transformational origins; it will
be ambiguously represented on the transformational level. The
ambiguity of the grammatical relation in (111) is a consequence of
the fact that the relation of "shoot" to "hunters" differs in the two
underlying kernel sentences. We do not have this ambiguity in (112),
since neither "they growl lions" nor "flowers raise" are grammatical
kernel sentences.
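
A sketch of the two nominalizing transformations, in the same illustrative
Python as before (with kernel-hood simply stipulated in the comments),
shows the two origins converging on one phrase:

    def nominalize_subject(np, v):
        # NP - C - V  ->  the + V + ing + of + NP (as for (112i)).
        return f"the {v}ing of {np}"

    def nominalize_object(np1, v, np2):
        # NP1 - C - V - NP2  ->  the + V + ing + of + NP2 (as for (112ii)).
        return f"the {v}ing of {np2}"

    # "shoot" occurs in both kernels, so (111) has two origins:
    print(nominalize_subject("the hunters", "shoot"))         # the shooting of the hunters
    print(nominalize_object("they", "shoot", "the hunters"))  # the shooting of the hunters
    # "growl" has only the intransitive kernel and "raise" only the
    # transitive one, so neither phrase of (112) is ambiguous.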
Similarly, consider such pairs as

(113) (i) the picture was painted by a new technique
(ii) the picture was painted by a real artist.

These sentences are understood quite differently, though identically
represented as NP - was + Verb + en - by + NP on the level of
phrase structure. But their transformational history is quite different.
(113ii) is the passive of "a real artist painted the picture"; (113i) is



grammatical relation in (111) is ambiguous (i.e., "hunters" may be subject or
object). Grammatical relations can be defined within phrase structure in terms
of the shape of the diagrams (15), etc. But in these terms there will be no grounds
for the assertion that either the subject-verb or the verb-object relation is to be
found in (111). If we analyze verbs into three classes, transitive, intransitive and
either transitive or intransitive, then even this (in itself insufficient) distinction
disappears.
⁴ Cf. footnote 3 on p. 72.






formed from, e.g., "John painted the picture by a new technique"
by a double transformation; first the passive, then the elliptical
transformation (mentioned in fn. 7 on p. 81) that drops the 'agent'
in the passive. An absolute homonym on the model of (113) is not
hard to find. For example,

(114) John was frightened by the new methods

may mean either that John is a conservative (new methods frighten
him) or that new methods of frightening people were used to
frighten John (an interpretation that would be the more normal one
if "being" were inserted after "was"). On the transformational
level, (114) has both the analysis of (113i) and (113ii), which
accounts for its ambiguity.

8.3 We can complete the argument by presenting an example of
the opposite extreme; namely, a case of sentences which are under-
stood in a similar manner, though they are quite distinct in phrase
structure and lower level representation. Consider the following
sentences, discussed in § 7.2.

(115) (i) John ate an apple           declarative
      (ii) did John eat an apple      yes-or-no-question |
      (iii) what did John eat                            | interrogative
      (iv) who ate an apple                              |

It is intuitively obvious that (115) contains two types of sentences,
declaratives (115i) and interrogatives (115ii-iv). Furthermore, the
interrogatives are intuitively subdivided into two types, the yes-or-
no-question (115ii), and the wh-questions (115iii, iv). It is difficult,
however, to find a formal basis for this classification that is not
arbitrary and ad hoc. If, for example, we classify sentences by their
'normal' intonation, then (115i), (115iii) and (115iv), with the
normal declarative (falling) intonation, will be opposed to (115ii),
with rising intonation. If we classify sentences on the basis of word
order, then (115i) and (115iv), with normal NP - Verb - NP order,
will be opposed to (115ii) and (115iii), which have inversion of
subject and auxiliary. Nevertheless, any grammar of English will

classify these sentences in the manner indicated in (115), and any 
speaker of English will understand these sentences according to this 
pattern. Certainly a linguistic theory that fails to provide grounds 
for this classification must be judged inadequate. 

The representation of a string on the level of transformations is
given by the terminal string (or strings) from which it originates and
the sequence of transformations by which it is derived from this
underlying string. In §§ 7.1-2 we came to the following conclusions
about the sentences (115) (= (70)). Each of these sentences origin-
ates from the underlying terminal string

(116) John - C - eat + an + apple   (= (61)),

which is derived within the phrase structure grammar. (115i) is
derived from (116) by applying obligatory transformations only;
hence, it is by definition a kernel sentence. (115ii) is formed from
(116) by applying the obligatory transformations and T_q. Both
(115iii) and (115iv) are formed by applying obligatory transforma-
tions, T_q, and T_w. They differ from one another only in the choice of
the noun phrase to which T_w applies. Suppose that we determine
sentence types in general in terms of transformational history, i.e.,
representation on the transformational level. Then the major sub-
divisions of (115) are the kernel sentence (115i) on the one hand, and
(115ii-iv), all of which have T_q in their transformational repre-
sentation, on the other. Thus (115ii-iv) are all interrogatives.
(115iii-iv) form a special subclass of interrogatives, since they are
formed by the additional subsidiary transformation T_w. Thus when
we formulate the simplest transformational grammar for (115), we
find that the intuitively correct classification of sentences is given by
the resulting transformational representations.



9 



SYNTAX AND SEMANTICS 



9.1 We have now found cases of sentences that are understood in
more than one way and are ambiguously represented on the trans-
formational level (though not on other levels) and cases of sentences
that are understood in a similar manner and are similarly represent-
ed on the transformational level alone. This gives an independent
justification and motivation for description of language in terms of
transformational structure, and for the establishment of trans-
formational representation as a linguistic level with the same fun-
damental character as other levels. Furthermore it adds force to the
suggestion that the process of "understanding a sentence" can be
explained in part in terms of the notion of linguistic level. In
particular, in order to understand a sentence it is necessary to know
the kernel sentences from which it originates (more precisely, the
terminal strings underlying these kernel sentences) and the phrase
structure of each of these elementary components, as well as the
transformational history of development of the given sentence from
these kernel sentences.¹ The general problem of analyzing the
process of "understanding" is thus reduced, in a sense, to the
problem of explaining how kernel sentences are understood, these
being considered the basic 'content elements' from which the usual,
more complex sentences of real life are formed by transformational
development.

¹ When transformational analysis is more carefully formulated, we find that
knowledge of the transformational representation of a sentence (which incor-
porates the phrase structure of the kernel strings from which the sentence
originates) is all that is necessary to determine the derived phrase structure of
the transform.






In proposing that syntactic structure can provide a certain insight 
into problems of meaning and understanding we have entered onto 
dangerous ground. There is no aspect of linguistic study more 
subject to confusion and more in need of clear and careful formu- 
lation than that which deals with the points of connection between 
syntax and semantics. The real question that should be asked is:
"How are the syntactic devices available in a given language put to 
work in the actual use of this language?" Instead of being concerned 
with this very important problem, however, the study of inter- 
connections between syntax and semantics has largely been domi- 
nated by a side issue and a misformulated question. The issue has
been whether or not semantic information is required for discover- 
ing or selecting a grammar; and the challenge usually posed by 
those who take the affirmative in this dispute is: "How can you 
construct a grammar with no appeal to meaning?" 

The remarks in § 8 about possible semantic implications of
syntactic study should not be misinterpreted as indicating support
for the notion that grammar is based on meaning. In fact, the
theory outlined in §§ 3-7 was completely formal and non-semantic.
In § 8, we have indicated briefly some ways in which the actual use
of available syntactic devices can be studied. Perhaps this problem
can be elucidated somewhat further by a purely negative discussion
of the possibility of finding semantic foundations for syntactic theory.

9.2.1 A great deal of effort has been expended in attempting to 
answer the question: "How can you construct a grammar with no
appeal to meaning?" The question itself, however, is wrongly put, 
since the implication that obviously one can construct a grammar 
with appeal to meaning is totally unsupported. One might with 
equal justification ask: "How can you construct a grammar with no 
knowledge of the hair color of speakers?" The question that should 
be raised is: "How can you construct a grammar?" I am not 
acquainted with any detailed attempt to develop the theory of 
grammatical structure in partially semantic terms or any specific 
and rigorous proposal for the use of semantic information in con- 
structing or evaluating grammars. It is undeniable that "intuition 



94 



SYNTACTIC STRUCTl'RtS 



about linguistic form" is very useful to the investigator oflinguistic 
form (i c, grammar). It is also quite clear that the major goal of 
grammatical theory is to replace this obscure reliance on intuition 
by some rigorous and objective approach. There is, however, little 
evidence that "intuition about meaning* is at all useful in the 
actual investigation oflinguistic form. I believe that the inadequacy 
of suggestions about the use of meaning in grammatical analysis 
faiU to be apparent only because of their vagueness and because of 
an unfortunate tendency to confuse "intuition about linguistic 
form" with -'intuition about meaning," two terms that have in 
common only their vagueness and their undesirabihty in linguistic 
theory. However, because of the widespread acceptance of such 
suggestion, it may be worthwhile to investigate some of them 
briefly, even though the burden of proof in this case rests completely 
on the linguist who claims to have been able to develop some gram- 
matical notion in semantic terms. 

9.2.2 Among the more common assertions put forth as supporting 
the dependence of grammar on meaning we have the following: 

(117) (i) two utterances are phonemically distinct if and only if
they differ in meaning;
(ii) morphemes are the smallest elements that have meaning;
(iii) grammatical sentences are those that have semantic
significance;

(iv) the grammatical relation subject-verb (i.e., NP - VP as
an analysis of Sentence) corresponds to the general
'structural meaning' actor-action;

(v) the grammatical relation verb-object (i.e., Verb - NP as
an analysis of VP) corresponds to the structural meaning
action-goal or action-object of action;

(vi) an active sentence and the corresponding passive are
synonymous.

9.2.3 A great many linguists have expressed the opinion that 
phonemic distinctness must be defined in terms of differential 
meaning (synonymity, to use a more familiar term), as proposed in 






(117i). However, it is immediately evident that (117i) cannot be
accepted, as it stands, as a definition of phonemic distinctness.² If
we are not to beg the question, the utterances in question must be
tokens, not types. But there are utterance tokens that are phonemic-
ally distinct and identical in meaning (synonyms) and there are
utterance tokens that are phonemically identical and different in
meaning (homonyms). Hence (117i) is false in both directions.
From left to right it is falsified by such pairs as "bachelor" and
"unmarried man," or, even more seriously, by such absolute
synonyms as /ekinamiks/ and /iykinamiks/ ("economics"), "ádult"
and "adúlt," /rašin/ and /reyšin/ ("ration"), and many others,
which may coexist even within one style of speech. From right to
left, (117i) is falsified by such pairs as "bank" (of a river) and "bank"
(for savings),³ "metal" and "medal" (in many dialects), and
numerous other examples. In other words, if we assign two utter-
ance tokens to the same utterance type on the basis of (117i), we
will simply get the wrong classification in a large number of cases.

A weaker claim than (117i) might be advanced as follows. 
Suppose that we have an absolute phonetic system given in advance 
of the analysis of any language, and guaranteed to be detailed 
enough so that every two phonemically distinct utterances in any 
language will be differently transcribed. It may now be the case that 
certain different tokens will be identically transcribed in this 
phonetic transcription. Suppose that we define the "ambiguous 
meaning" of an utterance token as the set of meanings of all tokens 
transcribed identically with this utterance token. We might now 
revise (117i), replacing "meaning" by "ambiguous meaning." This
might provide an approach to the homonymity problem, if we had
an immense corpus in which we could be fairly sure that each of the 

* See my "Semantic considerations in grammar," Monograph no. 8, pp. 
141 53 (1955), for a more detailed investigation of (U7i>. 

³ Note that we cannot argue that "bank" in "the river bank" and "bank"
in "the savings bank" are two occurrences of the same word, since this is
precisely the question under investigation. To say that two utterance tokens
are occurrences of the same word is to say that they are not phonemically
distinct, and presumably this is what the synonymity criterion (117i) is supposed
to determine for us.




phonetically distinct forms of a given word occurred with each of
the meanings that this word might have. It may be possible to
elaborate this approach even further to cope with the problem of
synonyms. In such a way one might hope to determine phonemic
distinctness by laborious investigation of the meanings of phonetic-
ally transcribed items in a vast corpus. The difficulty of determining
in any precise and realistic manner how many meanings several
items may have in common, however, as well as the vastness of the
undertaking, make the prospect for any such approach appear
rather dubious.

9.2.4 Fortunately, we do not have to pursue any such far-fetched
and elaborate program in order to determine phonemic distinctness.
In practice, every linguist uses much more simple and straight-
forward non-semantic devices. Suppose that a linguist is interested
in determining whether or not "metal" and "medal" are phonemic-
ally distinct in some dialect of English. He will not investigate the
meanings of these words, since this information is clearly irrelevant
to his purpose. He knows that the meanings are different (or he is
simply not concerned with the question) and he is interested in
determining whether or not the words are phonemically distinct.
A careful field worker would probably use the pair test,⁴ either with
two informants or with an informant and a tape recorder. For
example, he might make a random sequence of copies of the
utterance tokens that interest him, and then determine whether or
not the speaker can consistently identify them. If there is consistent
identification, the linguist may apply an even stricter test, asking the
speaker to repeat each word several times, and running the pair test
over again on the repetitions. If consistent distinguishability is
maintained under repetition, he will say that the words "metal" and
"medal" are phonemically distinct. The pair test with its variants

4 Cf. my "Semantic considerations of grammar," Monograph no. 8 t pp. 
141 -54(1955), M. Halle, "The strategy of phoncmics," Linguistics Today, Word 
10 197 -209 (1954), Z S- Hams, Methods in structural linguistics (Chicago, 
1951), pp 32f , C F. Hockctt, A manual of phonology Memoir II, Indiana 
University Publications m Anthropology and Linguistics (Baltimore, 1955), p. 146. 






and elaborations provides us with a clear operational criterion for
phonemic distinctness in completely non-semantic terms.⁵
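
The logic of the pair test is easy to state as a procedure. The sketch
below is a reconstruction of the test as described above, not a standard
implementation; the informant is modelled, by assumption, as a callable
that returns a label for each presented token.

    import random

    def pair_test(tokens_a, tokens_b, informant, rounds=3):
        # Present shuffled copies of two sets of utterance tokens and report
        # whether the informant identifies them consistently.
        sequence = [(t, "A") for t in tokens_a] + [(t, "B") for t in tokens_b]
        for _ in range(rounds):
            random.shuffle(sequence)
            if any(informant(token) != label for token, label in sequence):
                return False  # inconsistent: no contrast demonstrated
        return True           # consistent identification: phonemically distinct

    # E.g., tokens_a might be recordings of "metal" and tokens_b of "medal";
    # a speaker who does not hear the contrast cannot label them consistently.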

It is customary to view non-semantic approaches to grammar as 
possible alternatives to semantic approaches, and to criticize them 
as too complex, even if possible in principle. We have found, how-
ever, that in the case of phonemic distinctness, at least, exactly the 
opposite is true. There is a fairly straightforward and operational 
approach to the determination of phonemic distinctness in terms of 
such non-semantic devices as the pair test. It may be possible in 
principle to develop some semantically oriented equivalent to the 
pair test and its elaborations, but it appears that any such procedure 
will be quite complex, requiring exhaustive analysis of an immense 
corpus, and involving the linguist in the rather hopeless attempt to
determine how many meanings a given phone sequence might have. 

5 Lounsbury argues in his "A semantic analysis of the Pawnee kinship usage,"
Language 32.158-94 (1956), p. 190, that appeal to synonymity is necessary to
distinguish between free variation and contrast: "If a linguist who knows no
English records from my lips the word cat first with a final aspirated stop and
later with a final preglottalized unreleased stop, the phonetic data will not tell
him whether these forms contrast or not. It is only when he asks me, his
informant, whether the meaning of the first form is different from that of the
second, and I say it is not, that he will be able to proceed with his phonemic
analysis." As a general method, this approach is untenable. Suppose that the
linguist records /ekinamiks/ and /iykinamiks/, /viksin/ and /fiymeyl # faks/, etc.,
and asks whether or not they are different in meaning. He will learn that they
are not, and will incorrectly assign them the same phonemic analysis, if he takes
this position literally. On the other hand, there are many speakers who do not
distinguish "metal" from "medal," though if asked, they may be quite sure that
they do. The responses of such informants to Lounsbury's direct question about
meaning would no doubt simply becloud the issue.

We can make Lounsbury's position more acceptable by replacing the question
"do they have the same meaning?" with "are they the same word?" This will
avoid the pitfalls of the essentially irrelevant semantic question, but it is hardly
acceptable in this form, since it amounts to asking the informant to do the
linguist's work; it replaces an operational test of behavior (such as the pair test)
by an informant's judgment about his behavior. The operational tests for
linguistic notions may require the informant to respond, but not to express his
opinion about his behavior, his judgment about synonymity, about phonemic
distinctness, etc. The informant's opinions may be based on all sorts of irrele-
vant factors. This is an important distinction that must be carefully observed if
the operational basis for grammar is not to be trivialized.




9.2.5 There is one further difficulty of principle that should be
mentioned in the discussion of any semantic approach to phonemic
distinctness. We have not asked whether the meanings assigned to
distinct (but phonemically identical) tokens are identical, or merely
very similar. If the latter, then all of the difficulties of determining
phonemic distinctness are paralleled (and magnified, because of the
inherent obscurity of the subject matter) in determining sameness of
meaning. We will have to determine when two distinct meanings
are sufficiently similar to be considered 'the same.' If, on the other
hand, we try to maintain the position that the meanings assigned
are always identical, that the meaning of a word is a fixed and
unchanging component of each occurrence, then a charge of
circularity seems warranted. It seems that the only way to uphold
such a position would be to conceive of the meaning of a token as
"the way in which tokens of this type are (or can be) used," the class
of situations in which they can be used, the type of response that
they normally evoke, or something of this sort. But it is difficult to
make any sense at all out of such a conception of meaning without a
prior notion of utterance type. It would appear, then, that even
apart from our earlier objections, any approach to phonemic
distinctness in semantic terms is either circular or is based on a
distinction that is considerably more difficult to establish than the
distinction it is supposed to clarify.

9.2.6 How, then, can we account for the widespread acceptance of 
some such formulation as (117i)? I think that there are two ex- 
planations for this. In part, it is a consequence of the assumption 
that semantic approaches are somehow immediately given and are 
too simple to require analysis. Any attempt to provide a careful 
description, however, quickly dispels this illusion. A semantic 
approach to some grammatical notion requires as careful and 
detailed a development as is justly required of any non-semantic 
approach. And as we have seen, a semantic approach to phonemic 
distinctness is beset by quite considerable difficulties. 

A second source for such formulations as (117i), I believe, is a
confusion of "meaning" with "informant's response." We thus
find such comments on linguistic method as the following: "In
linguistic analysis we define contrast among forms operationally in
terms of difference in meaning responses." 6 We have observed in
§ 9.2.3 that if we were to determine contrast by 'meaning response' in
any direct way we would simply make the wrong decision in a great
many places; and if we try to avoid the difficulties that immediately
arise, we are led to a construction that is so elaborate and has such
intolerable assumptions that it can hardly be taken as a serious
proposal. And we saw in § 9.2.5 that there are apparently even
more fundamental difficulties of principle. Hence, if we interpret
the quoted assertion literally, we must reject it as incorrect.

If we drop the word "meaning" from this statement, however, we
have a perfectly acceptable reference to such devices as the pair
test. But there is no warrant for interpreting the responses studied
in the pair test as semantic in any way. 7 One might very well
develop an operational test for rhyme that would show that "bill"
and "pill" are related in a way in which "bill" and "ball" are not.
There would be nothing semantic in this test. Phonemic identity is
essentially complete rhyme, and there is no more reason for
postulating some unobserved semantic reaction in the case of
"bill" and "bill" than in the case of "bill" and "pill."

It is strange that those who have objected to basing linguistic
theory on such formulations as (117i) should have been accused of
disregard for meaning. It appears to be the case, on the contrary,
that those who propose some variant of (117i) must be interpreting
"meaning" so broadly that any response to language is called
"meaning." But to accept this view is to denude the term "mean-
ing" of any interest or significance. I think that anyone who wishes
to save the phrase "study of meaning" as descriptive of an important

6 F. Lounsbury, "A semantic analysis of the Pawnee kinship usage," Lan-
guage 32.158-94 (1956), p. 191.

7 One should not be confused by the fact that the subject in the pair test
may be asked to identify the utterance tokens by meaning. He might just as well
be asked to identify them by arbitrarily chosen numbers, by signs of the zodiac,
etc. We can no more use some particular formulation of the pair test as an
argument for dependence of grammatical theory on meaning than as an argu-
ment that linguistics is based on arithmetic or astrology.






aspect of linguistic research must reject this identification of
"meaning" with "response to language," and along with it, such
formulations as (117i).

9.2.7 It is, of course, impossible to prove that semantic notions
are of no use in grammar, just as it is impossible to prove the
irrelevance of any other given set of notions. Investigation of such
proposals, however, invariably seems to lead to the conclusion that
only a purely formal basis can provide a firm and productive
foundation for the construction of grammatical theory. Detailed
investigation of each semantically oriented proposal would go
beyond the bounds of this study, and would be rather pointless, but
we can mention briefly some of the more obvious counterexamples
to such familiar suggestions as (117).

Such morphemes as "to" in "I want to go" or the dummy carrier
"do" in "did he come?" (cf. § 7.1) can hardly be said to have a
meaning in any independent sense, and it seems reasonable to
assume that an independent notion of meaning, if clearly given,
may assign meaning of some sort to such non-morphemes as gl- in
"gleam," "glimmer," "glow." 8 Thus we have counterexamples to
the suggestion (117ii) that morphemes be defined as minimal
meaning-bearing elements. In § 2 we have given grounds for
rejecting "semantic significance" as a general criterion for gram-
maticalness, as proposed in (117iii). Such sentences as "John re-
ceived a letter" or "the fighting stopped" show clearly the unten-
ability of the assertion (117iv) that the grammatical relation
subject-verb has the 'structural meaning' actor-action, if meaning is
taken seriously as a concept independent of grammar. Similarly,
the assignment (117v) of any such structural meaning as action-goal
to the verb-object relation as such is precluded by such sentences as
"I will disregard his incompetence" or "I missed the train." In
contradiction to (117vi), we can describe circumstances in which a
'quantificational' sentence such as "everyone in the room knows at

8 See L. Bloomfield, Language (New York, 1933), p. 156; Z. S. Harris,
Methods in structural linguistics (Chicago, 1951), p. 177; O. Jespersen, Language
(New York, 1922), chapter XX, for many further examples.






least two languages" may be true, while the corresponding passive
"at least two languages are known by everyone in the room" is false,
under the normal interpretation of these sentences, e.g., if one
person in the room knows only French and German, and another
only Spanish and Italian. This indicates that not even the weakest
semantic relation (factual equivalence) holds in general between
active and passive.

9.3 These counterexamples should not, however, blind us to the
fact that there are striking correspondences between the structures
and elements that are discovered in formal, grammatical analysis
and specific semantic functions. None of the assertions of (117)
is wholly false; some are very nearly true. It seems clear, then, that
undeniable, though only imperfect correspondences hold between
formal and semantic features in language. The fact that the cor-
respondences are so inexact suggests that meaning will be relatively
useless as a basis for grammatical description. 9 Careful analysis of
each proposal for reliance on meaning confirms this, and shows, in
fact, that important insights and generalizations about linguistic
structure may be missed if vague semantic clues are followed too
closely. For example, we have seen that the active-passive relation
is just one instance of a very general and fundamental aspect of
formal linguistic structure. The similarity between active-passive,
negation, declarative-interrogative, and other transformational
relations would not have come to light if the active-passive relation
had been investigated exclusively in terms of such notions as
synonymity.

9 Another reason for suspecting that grammar cannot be effectively devel-
oped on a semantic basis was brought out in the particular case of phonemic
distinctness in § 9.2.5. More generally, it seems that the study of meaning is
fraught with so many difficulties even after the linguistic meaning-bearing
elements and their relations are specified, that any attempt to study meaning
independently of such specification is out of the question. To put it differently,
given the instrument language and its formal devices, we can and should in-
vestigate their semantic function (as, e.g., in R. Jakobson, "Beitrag zur all-
gemeinen Kasuslehre," Travaux du Cercle Linguistique de Prague 6.240-88
(1936)), but we cannot, apparently, find semantic absolutes, known in advance
of grammar, that can be used to determine the objects of grammar in any way.






The fact that correspondences between formal and semantic
features exist, however, cannot be ignored. These correspondences
should be studied in some more general theory of language that will
include a theory of linguistic form and a theory of the use of
language as subparts. In § 8 we found that there are, apparently,
fairly general types of relations between these two domains that
deserve more intensive study. Having determined the syntactic
structure of the language, we can study the way in which this
syntactic structure is put to use in the actual functioning of lan-
guage. An investigation of the semantic function of level structure,
as suggested briefly in § 8, might be a reasonable step towards a
theory of the interconnections between syntax and semantics. In
fact, we pointed out in § 8 that the correlations between the form
and use of language can even provide certain rough criteria of
adequacy for a linguistic theory and the grammars to which it leads.
We can judge formal theories in terms of their ability to explain and
clarify a variety of facts about the way in which sentences are used
and understood. In other words, we should like the syntactic
framework of the language that is isolated and exhibited by the
grammar to be able to support semantic description, and we shall
naturally rate more highly a theory of formal structure that leads
to grammars that meet this requirement more fully.

Phrase structure and transformational structure appear to provide
the major syntactic devices available in language for organization
and expression of content. The grammar of a given language must
show how these abstract structures are actually realized in the case
of the language in question, while linguistic theory must seek to
clarify these foundations for grammar and the methods for
evaluating and choosing between proposed grammars.

It is important to recognize that by introducing such considera-
tions as those of § 8 into the metatheory that deals with grammar
and semantics and their points of connection, we have not altered
the purely formal character of the theory of grammatical structure
itself. In §§ 3-7 we outlined the development of some fundamental
linguistic concepts in purely formal terms. We considered the
problem of syntactic research to be that of constructing a device
for producing a given set of grammatical sentences and of studying
the properties of grammars that do this effectively. Such semantic
notions as reference, significance, and synonymity played no role in
the discussion. The outlined theory, of course, had serious gaps in
it; in particular, the assumption that the set of grammatical
sentences is given in advance is clearly too strong, and the notion of
"simplicity" to which appeal was made explicitly or tacitly was left
unanalyzed. However, neither these nor other gaps in this develop-
ment of grammatical theory can be filled in or narrowed, to my
knowledge, by constructing this theory on a partially semantic
basis.

In §§ 3-7, then, we were studying language as an instrument or a
tool, attempting to describe its structure with no explicit reference
to the way in which this instrument is put to use. The motivation
for this self-imposed formality requirement for grammars is quite
simple: there seems to be no other basis that will yield a rigorous,
effective, and 'revealing' theory of linguistic structure. The require-
ment that this theory shall be a completely formal discipline is
perfectly compatible with the desire to formulate it in such a way as
to have suggestive and significant interconnections with a parallel
semantic theory. What we have pointed out in § 8 is that this formal
study of the structure of language as an instrument may be expected
to provide insight into the actual use of language, i.e., into the
process of understanding sentences.

9.4 To understand a sentence we must know much more than the 
analysis of this sentence on each linguistic level. We must also 
know the reference and meaning 10 of the morphemes or words of 

10 Goodman has argued, to my mind quite convincingly, that the notion
of meaning of words can at least in part be reduced to that of reference of
expressions containing these words. See N. Goodman, "On likeness of meaning,"
Analysis, vol. 10, no. 1 (1949); idem, "On some differences about meaning,"
Analysis, vol. 13, no. 4 (1953). Goodman's approach amounts to reformulating
a part of the theory of meaning in the much clearer terms of the theory of
reference, just as much of our discussion can be understood as suggesting a
reformulation of parts of the theory of meaning that deal with so-called 'struc-
tural meaning' in terms of the completely nonsemantic theory of grammatical
structure. Part of the difficulty with the theory of meaning is that "meaning"






which it is composed; naturally, grammar cannot be expected to be
of much help here. These notions form the subject matter for
semantics. In describing the meaning of a word it is often expe-
dient, or necessary, to refer to the syntactic framework in which this
word is usually embedded; e.g., in describing the meaning of "hit"
we would no doubt describe the agent and object of the action in
terms of the notions "subject" and "object", which are apparently
best analyzed as purely formal notions belonging to the theory of
grammar. 11 We shall naturally find that a great many words or
morphemes of a single grammatical category are described seman-
tically in partially similar terms, e.g. verbs in terms of subject and
object, etc. This is not surprising; it means that the syntactic
devices available in the language are being used fairly systematically.
We have seen, however, that to generalize from this fairly syste-
matic use and to assign 'structural meanings' to grammatical
categories or constructions just as 'lexical meanings' are assigned to
words or morphemes, is a step of very questionable validity.

Another common but dubious use of the notion 'structural
meaning' is with reference to the meanings of so-called 'grammatic-
ally functioning' morphemes such as ing, ly, prepositions, etc. The
contention that the meanings of these morphemes are fundament-
ally different from the meanings of nouns, verbs, adjectives, and
perhaps other large classes, is often supported by appeal to the fact
that these morphemes can be distributed in a sequence of blanks or
nonsense syllables so as to give the whole the appearance of a
sentence, and in fact, so as to determine the grammatical category of
the nonsense elements. For example, in the sequence "Pirots
karulize elatically" we know that the three words are noun, verb,
and adverb by virtue of the s, ize, and ly, respectively. But this

tends to be used as a catch-all term to include every aspect of language that we
know very little about. Insofar as this is correct, we can expect various aspects
of this theory to be claimed by other approaches to language in the course of
their development.

11 Such a description of the meaning of "hit" would then account automatic-
ally for the use of "hit" in such transforms as "Bill was hit by John," "hitting
Bill was wrong," etc., if we can show in sufficient detail and generality that
transforms are 'understood' in terms of the underlying kernel sentences.






property does not sharply distinguish 'grammatical' morphemes
from others, since in such sequences as "the Pirots karul — yester-
day" or "give him — water" the blanks are also determined as a
variant of past tense, in the first case, and as "the," "some," etc., but
not "a," in the second. The fact that in these cases we were forced
to give blanks rather than nonsense words is explained by the
productivity or 'open-endedness' of the categories Noun, Verb,
Adjective, etc., as opposed to the categories Article, Verbal Affix,
etc. In general, when we distribute a sequence of morphemes in a
sequence of blanks we limit the choice of elements that can be
placed in the unfilled positions to form a grammatical sentence.
Whatever differences there are among morphemes with respect to
this property are apparently better explained in terms of such
grammatical notions as productivity, freedom of combination, and
size of substitution class than in terms of any presumed feature of
meaning.
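
The role of the formal markers can be pictured with a small
fragment; the following is a minimal sketch, assuming an invented
three-item suffix inventory, which assigns categories to the nonsense
words from their affixes alone:

    # Toy illustration: category assignment from formal markers alone.
    # The suffix inventory is a deliberately tiny, invented sample.
    SUFFIX_TO_CATEGORY = [("ize", "Verb"), ("ally", "Adverb"), ("s", "Noun")]

    def guess_category(word):
        for suffix, category in SUFFIX_TO_CATEGORY:
            if word.endswith(suffix):
                return category
        return "unknown"

    print([guess_category(w) for w in "Pirots karulize elatically".split()])
    # -> ['Noun', 'Verb', 'Adverb']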



10 



SUMMARY 



In this discussion we have stressed the following points: The most
that can reasonably be expected of linguistic theory is that it 
shall provide an evaluation procedure for grammars. The theory 
of linguistic structure must be distinguished clearly from a manual 
of helpful procedures for the discovery of grammars, although 
such a manual will no doubt draw upon the results of linguistic 
theory, and the attempt to develop such a manual will probably 
(as it has in the past) contribute substantially to the formation 
of linguistic theory. If this viewpoint is adopted, there is little 
motivation for the objection to mixing levels, for the concep- 
tion of higher-level elements as being literally constructed out of 
lower-level elements, or for the feeling that syntactic work is 
premature until all problems of phonemics or morphology are 
solved.

Grammar is best formulated as a self-contained study indepen- 
dent of semantics. In particular, the notion of grammaticalness can- 
not be identified with meaningfulness (nor does it have any special 
relation, even approximate, to the notion of statistical order of 
approximation). In carrying out this independent and formal study, 
we find that a simple model of language as a finite state Markov
process that produces sentences from left to right is not acceptable, 
and that such fairly abstract linguistic levels as phrase structure and 
transformational structure are required for the description of 
natural languages. 

We can greatly simplify the description of English and gain new
and important insight into its formal structure if we limit the direct
description in terms of phrase structure to a kernel of basic sen-
tences (simple, declarative, active, with no complex verb or noun 
phrases), deriving all other sentences from these (more properly, 
from the strings that underlie them) by transformation, possibly 
repeated. Conversely, having found a set of transformations that 
carry grammatical sentences into grammatical sentences, we can 
determine the constituent structure of particular sentences by 
investigating their behavior under these transformations with alter- 
native constituent analyses. 

We consequently view grammars as having a tripartite structure. 
A grammar has a sequence of rules from which phrase structure can 
be reconstructed and a sequence of morphophonemic rules that 
convert strings of morphemes into strings of phonemes. Connect- 
ing these sequences, there is a sequence of transformational rules 
that carry strings with phrase structure into new strings to which 
the morphophonemic rules can apply. The phrase structure and 
morphophonemic rules are elementary in a sense in which the 
transformational rules are not. To apply a transformation to a
string, we must know some of the history of derivation of this string; 
but to apply non-transformational rules, it is sufficient to know the 
shape of the string to which the rule applies. 
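
The control structure of such a tripartite grammar can be pictured
as a pipeline; the following is a minimal sketch, with each
component reduced to a stub supplied by the caller:

    def generate(phrase_structure_rules, transformations, morphophonemics):
        """Sketch of the tripartite organization described above."""
        # 1. Phrase structure: rewrite from the initial symbol, keeping
        #    the derivational history, since transformations need to
        #    know how the string was derived, not just its shape.
        string, history = phrase_structure_rules("Sentence")
        # 2. Transformations: each rule consults both the string and
        #    its history of derivation (constituent structure).
        for transform in transformations:
            string = transform(string, history)
        # 3. Morphophonemics: purely shape-based rewriting of the
        #    final morpheme string into a string of phonemes.
        return morphophonemics(string)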

As an automatic consequence of the attempt to construct the 
simplest grammar for English in terms of the abstract levels devel- 
oped in linguistic theory we find that the apparently irregular 
behavior of certain words (e.g., "have," "be," "seem") is really a
case of higher level regularity. We also find that many sentences 
are assigned dual representations on some level, and many pairs of 
sentences are assigned similar or identical representations on some 
level. In a significant number of cases, dual representation (con- 
structional homonymity) corresponds to ambiguity of the re- 
presented sentence and similar or identical representation appears 
in cases of intuitive similarity of utterances. 

More generally, it appears that the notion of "understanding a
sentence" must be partially analyzed in grammatical terms. To
understand a sentence it is necessary (though not, of course,
sufficient) to reconstruct its representation on each level, including
the transformational level where the kernel sentences underlying a
given sentence can be thought of, in a sense, as the 'elementary 
content elements' out of which this sentence is constructed. In other 
words, one result of the formal study of grammatical structure is 
that a syntactic framework is brought to light which can support 
semantic analysis. Description of meaning can profitably refer to 
this underlying syntactic framework, although systematic semantic 
considerations are apparently not helpful in determining it in the 
first place. The notion of "structural meaning" as opposed to
"lexical meaning", however, appears to be quite suspect, and it is 
questionable that the grammatical devices available in language are 
used consistently enough so that meaning can be assigned to them 
directly. Nevertheless, we do find many important correlations, 
quite naturally, between syntactic structure and meaning; or, to put 
it differently, we find that the grammatical devices are used quite 
systematically. These correlations could form part of the subject 
matter for a more general theory of language concerned with 
syntax and semantics and their points of connection. 



11 



Appendix I 
NOTATIONS AND TERMINOLOGY 

In this appendix we shall present a brief account of the new or less 
familiar notational and terminological conventions that we have 
used.

A linguistic level is a method of representing utterances. It has a
finite vocabulary of symbols (on the phonemic level, we call this
vocabulary the alphabet of the language) which can be placed in a
linear sequence to form strings of symbols by an operation called
concatenation and symbolized by +. Thus on the morphemic level
in English we have the vocabulary elements the, boy, S, past, come,
etc., and we can form the string the + boy + S + come + past (which
would be carried by the morphophonemic rules into the string of
elements /ðibɔyz#keym/) representing the utterance "the boys
came." Apart from the phonemic level, we have used italics or
quotes for vocabulary symbols and strings of representing symbols;
on the phonemic level we suppress the concatenation symbol + and
use the customary slant lines, as in the example just given. We use
X, Y, Z, W as variables over strings.
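
In computational terms, a string on a level is simply a finite
sequence of vocabulary symbols, and concatenation is sequence
concatenation; a minimal sketch:

    # A string on the morphemic level: a sequence of vocabulary symbols.
    the_boys_came = ["the", "boy", "S", "come", "past"]

    def concat(*strings):
        """Concatenation, written '+' in the text: join symbol sequences."""
        result = []
        for s in strings:
            result += s
        return result

    assert concat(["the", "boy", "S"], ["come", "past"]) == the_boys_came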

Occasionally we use a hyphen instead of the plus sign, to sym- 
bolize concatenation. We do this to call special attention to a 
subdivision of the utterance with which we happen to be particularly 
concerned at the moment. Sometimes we use wider spacing for the 
same purpose. Neither of these notational devices has any syste- 
matic significance; they are introduced simply for clarity in ex- 
position. In the discussion of transformations, we use the hyphen 
to indicate the subdivision of a string that is imposed by a certain 
transformation. Thus when we assert that the question trans- 
formation T_q applies in particular to a string of the form
(118) NP - have - en + V (cf. (37iii))

inverting the first two segments, we mean that it applies, for
example, to

(119) they - have - en + arrive,

since they is an NP and arrive is a V in this string. The transform in
this case will be

(120) have - they - en + arrive,

ultimately, "have they arrived?"
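
Read procedurally, T_q merely permutes the segments marked off
by the hyphens; a minimal sketch, with each segment represented as
a plain string:

    def t_q(segments):
        """Question transformation sketch: invert the first two of the
        hyphen-marked segments, leaving the rest unchanged."""
        first, second, *rest = segments
        return [second, first, *rest]

    # (119) they - have - en+arrive  ->  (120) have - they - en+arrive
    assert t_q(["they", "have", "en+arrive"]) == ["have", "they", "en+arrive"]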

A rule of the form X → Y is to be interpreted as the instruction
"rewrite X as Y," where X and Y are strings. We use parentheses to
indicate that an element may or may not occur, and brackets (or
listing) to indicate choice among elements. Thus both the rules
(121i) and (121ii)

(121) (i)  a → b (c)
      (ii) a → {b + c, b}

are abbreviations for the pair of alternatives: a → b + c, a → b.
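
These abbreviatory conventions can be expanded mechanically; a
minimal sketch, representing each position on the right-hand side as
a list of alternatives, with the empty string standing for an omitted
optional element:

    from itertools import product

    def expand(lhs, parts):
        """Expand an abbreviated rule into its alternative plain rules.
        parts: one list of alternatives per right-hand-side position;
        '' among the alternatives marks an optional element."""
        alternatives = []
        for choice in product(*parts):
            rhs = " + ".join(sym for sym in choice if sym)
            alternatives.append(f"{lhs} -> {rhs}")
        return alternatives

    # (121i)  a -> b (c)       : c optional
    print(expand("a", [["b"], ["c", ""]]))   # ['a -> b + c', 'a -> b']
    # (121ii) a -> {b + c, b}  : listed choice
    print(expand("a", [["b + c", "b"]]))     # ['a -> b + c', 'a -> b']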

The following list gives the page references for the first occurrence 
of the special symbols other than those mentioned above. 

(122)   NP        p. 26          S         p. 39
        VP        p. 26          Ø         p. 39
        T         p. 26          past      p. 39
        N         p. 26          Af        p. 39
        NP_sing   p. 28          #         p. 39
        NP_pl     p. 29          A         p. 65
        [Σ, F]    p. 29          wh        p. 69, fn. 2
        Aux       p. 39          Adj       p. 72
        V         p. 39          PP        p. 74
        C         p. 39          Prt       p. 75
        M         p. 39          Comp      p. 76
        en        p. 39






12

Appendix II
EXAMPLES OF ENGLISH PHRASE STRUCTURE
AND TRANSFORMATIONAL RULES

We collect here for ease of reference the examples of rules of English
grammar that played an important role in the discussion. The
numbers to the left give the proper ordering of these rules, assuming
that this sketch is the outline of a grammar of the form (35). The
parenthesized number to the right of each rule is the number of this
rule in the text. Certain rules have been modified from their forms
in the text in the light of subsequent decisions or for more systematic
presentation.



Phrase Structure:

Σ:  # Sentence #

F:  1. Sentence → NP + VP                       (13i)
    2. VP → Verb + NP                           (13iii)
    3. NP → {NP_sing, NP_pl}                    (p. 29, fn. 3)
    4. NP_sing → T + N + Ø                      (p. 29, fn. 3)
    5. NP_pl → T + N + S                        (p. 29, fn. 3)
    6. T → the                                  (13iv)
    7. N → man, ball, etc.                      (13v)
    8. Verb → Aux + V                           (28i)
    9. V → hit, take, walk, read, etc.          (28ii)
    10. Aux → C (M) (have + en) (be + ing)      (28iii)
    11. M → will, can, may, shall, must         (28iv)
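
Rules 1-11 can be transcribed directly as data. The following is a
minimal sketch with the lexical lists truncated and the optional
constituents of rule 10 omitted; Ø and C are carried along as
placeholder terminals, since in the text they are spelled out by the
number transformation (rule 15) and later rules. The sketch derives
a terminal string by always rewriting the leftmost rewritable symbol
and always taking the first listed alternative:

    # A fragment of F transcribed as data (lexical lists truncated):
    RULES = {
        "Sentence": [["NP", "VP"]],
        "VP":       [["Verb", "NP"]],
        "NP":       [["NP_sing"], ["NP_pl"]],
        "NP_sing":  [["T", "N", "Ø"]],
        "NP_pl":    [["T", "N", "S"]],
        "T":        [["the"]],
        "N":        [["man"], ["ball"]],
        "Verb":     [["Aux", "V"]],
        "V":        [["hit"], ["take"]],
        "Aux":      [["C"]],   # optional (M)(have+en)(be+ing) omitted
    }

    def rewrite_leftmost(string, choose=lambda opts: opts[0]):
        """One derivation step: rewrite the leftmost rewritable symbol."""
        for i, symbol in enumerate(string):
            if symbol in RULES:
                return string[:i] + choose(RULES[symbol]) + string[i + 1:]
        return string  # terminal string: nothing left to rewrite

    line = ["Sentence"]
    while any(s in RULES for s in line):
        line = rewrite_leftmost(line)
    print(line)  # ['the', 'man', 'Ø', 'C', 'hit', 'the', 'man', 'Ø']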



Transformational Structure:

A transformation is defined by the structural analysis of the strings
to which it applies and the structural change that it effects on these
strings.
12. Passive — optional:
    Structural analysis: NP - Aux - V - NP
    Structural change: X_1 - X_2 - X_3 - X_4 →
                       X_4 - X_2 + be + en - X_3 - by + X_1          (34)

13. T_sep^ob — obligatory:
    Structural analysis: X - V_1 - Prt - Pronoun                     (86)
                         X - V_1 - Comp - NP                         (92)
    Structural change: X_1 - X_2 - X_3 - X_4 → X_1 - X_2 - X_4 - X_3

14. T_sep — optional:
    Structural analysis: X - V_1 - Prt - NP                          (85)
    Structural change: same as 13

15. Number Transformation — obligatory:
    Structural analysis: X - C - Y
    Structural change: C → S in the context NP_sing —
                       C → Ø in other contexts                       (29i)
                       C → past in any context

16. T_not — optional:
    Structural analysis: NP - C - V ...
                         NP - C + M - ...
                         NP - C + have - ...
                         NP - C + be - ...                           (37)
    Structural change: X_1 - X_2 - X_3 → X_1 - X_2 + n't - X_3

17. T_A — optional:
    Structural analysis: same as 16 (cf. (45)-(47))
    Structural change: X_1 - X_2 - X_3 → X_1 - X_2 + A - X_3

18. T_q — optional:
    Structural analysis: same as 16 (cf. (41)-(43))
    Structural change: X_1 - X_2 - X_3 → X_2 - X_1 - X_3

19. T_w — optional and conditional on T_q:
    T_w1: Structural analysis: X - NP - Y (X or Y may be null)
          Structural change: same as 18                              (60i)
    T_w2: Structural analysis: NP - X                                (60ii)
          Structural change: X_1 - X_2 → wh + X_1 - X_2,
          where wh + animate noun → who and
                wh + inanimate noun → what (cf. p. 69, fn. 2)
20. Auxiliary Transformation — obligatory:
    Structural analysis: X - Af - v - Y (where Af is any C or is
                         en or ing; v is any M or V, or have or be)  (29ii)
    Structural change: X_1 - X_2 - X_3 - X_4 → X_1 - X_3 - X_2 # - X_4

21. Boundary Transformation — obligatory:
    Structural analysis: X - Y (where X ≠ v or Y ≠ Af)               (29iii)
    Structural change: X_1 - X_2 → X_1 - # X_2

22. do-Transformation — obligatory:
    Structural analysis: # - Af                                      (40)
    Structural change: X_1 - X_2 → X_1 - do + X_2



Generalized Transformations:

23. Conjunction:                                                     (26)
    Structural analysis: of S_1: Z - X - W
                         of S_2: Z - X - W
    where X is a minimal element (e.g., NP, VP, etc.) and
    Z, W are segments of terminal strings.
    Structural change: (X_1 - X_2 - X_3; X_4 - X_5 - X_6) →
                       X_1 - X_2 + and + X_5 - X_3

24. T_so:                                                            (48)-(50)
    Structural analysis: of S_1: same as 16
                         of S_2: same as 16
    Structural change: (X_1 - X_2 - X_3; X_4 - X_5 - X_6) →
                       X_1 - X_2 - X_3 - and - so - X_5 - X_4
    T_so is actually a compound with the conjunction transfor-
    mation.

25. Nominalizing Transformation T_to:                                (p. 72, fn. 3)
    Structural analysis: of S_1: NP - VP
                         of S_2: X - NP - Y (X or Y may be null)
    Structural change: (X_1 - X_2; X_3 - X_4 - X_5) →
                       X_3 - to + X_2 - X_5

26. Nominalizing Transformation T_ing:                               (p. 72, fn. 3)
    Same as 25, with ing in place of to in the Structural change.
27. Nominalizing Transformation T_Adj:                               (71)
    Structural analysis: of S_1: T - N - is - A
                         of S_2: same as 25
    Structural change: (X_1 - X_2 - X_3 - X_4; X_5 - X_6 - X_7) →
                       X_5 - X_1 + X_4 + X_2 - X_7

Morphophonemic Structure:

Rules (19); (45); p. 58, fn. 8; p. 69, fn. 2; etc.
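
Each transformation above is a pair consisting of a structural
analysis and a structural change, and can be transcribed as a
function over the analyzed segments; a minimal sketch of rule 12,
the passive, with segments represented as lists of symbols:

    def passive(x1, x2, x3, x4):
        """Rule 12, Passive:
        NP - Aux - V - NP  ->  X_4 - X_2 + be + en - X_3 - by + X_1."""
        return [x4, x2 + ["be", "en"], x3, ["by"] + x1]

    # "the man C hit the ball", analyzed as NP - Aux - V - NP:
    print(passive(["the", "man"], ["C"], ["hit"], ["the", "ball"]))
    # [['the', 'ball'], ['C', 'be', 'en'], ['hit'], ['by', 'the', 'man']]
    # -> "the ball was hit by the man" once the number and auxiliary
    #    transformations (rules 15 and 20) have applied.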

We thus have three sets of rules, as in (35): rules of phrase
structure, transformational rules (including simple and generalized
transformations), and morphophonemic rules. Order of rules is
essential, and in a properly formulated grammar it would be indi-
cated in all three sections, along with a distinction between optional
and obligatory rules and, at least in the transformational part, a
statement of conditional dependence among rules. The result of
applying all of these rules is an extended derivation (such as (13)-
(30)-(31)) terminating in a string of phonemes of the language under
analysis, i.e., a grammatical utterance. This formulation of the
transformational rules is meant only to be suggestive. We have not
developed the machinery for presenting all these rules in a proper
and uniform manner. See the references cited in fn. 8, p. 44, for a
more detailed development and application of transformational
analysis.



BIBLIOGRAPHY 



1. Y. Bar-Hillel, "Logical syntax and semantics," Language 30.230-7 (1954).

2. B. Bloch, "A set of postulates for phonemic analysis," Language 24.3-46
(1948).

3. L. Bloomfield, Language (New York, 1933).

4. N. Chomsky, The logical structure of linguistic theory (mimeographed).

5. ——, "Semantic considerations in grammar," Monograph no. 8, pp. 141-53
(1955), The Institute of Languages and Linguistics, Georgetown University.

6. ——, "Systems of syntactic analysis," Journal of Symbolic Logic 18.242-56
(1953).

7. ——, "Three models for the description of language," I.R.E. Transactions on
Information Theory, vol. IT-2, Proceedings of the symposium on information
theory, Sept., 1956.

8. ——, Transformational analysis, Ph. D. Dissertation, University of Pennsyl-
vania (1955).

9. ——, with M. Halle and F. Lukoff, "On accent and juncture in English,"
For Roman Jakobson ('s-Gravenhage, 1956).

10. M. Fowler, Review of Z. S. Harris, Methods in structural linguistics, in
Language 28.504-9 (1952).

11. N. Goodman, The structure of appearance (Cambridge, 1951).

12. ——, "On likeness of meaning," Analysis, vol. 10, no. 1 (1949).

13. ——, "On some differences about meaning," Analysis, vol. 13, no. 4 (1953).
Both 12 and 13 are reprinted, with an additional note, in Philosophy and
Analysis, M. Macdonald, editor (New York, 1954).

14. M. Halle, "The strategy of phonemics," Linguistics Today, Word 10.197-209
(1954).

15. Z. S. Harris, "Discourse analysis," Language 28.1-30 (1952).

16. ——, "Distributional structure," Linguistics Today, Word 10.146-62 (1954).

17. ——, "From phoneme to morpheme," Language 31.190-222 (1955).

18. ——, Methods in structural linguistics (Chicago, 1951).

19. ——, "Co-occurrence and transformations in linguistic structure," Language
33.283-340 (1957).

20. F. W. Harwood, "Axiomatic syntax; the construction and evaluation of a
syntactic calculus," Language 31.409-14 (1955).

21. L. Hjelmslev, Prolegomena to a theory of language, Memoir 7, Indiana
University Publications in Anthropology and Linguistics (Baltimore, 1953).

22. C. F. Hockett, "A formal statement of morphemic analysis," Studies in
Linguistics 10.27-39 (1952).

23. ——, A manual of phonology, Memoir II, Indiana University Publications
in Anthropology and Linguistics (Baltimore, 1955).

24. ——, "Problems of morphemic analysis," Language 23.321-43 (1947).

25. ——, "Two models of grammatical description," Linguistics Today, Word
10.210-33 (1954).

26. ——, "Two fundamental problems in phonemics," Studies in Linguistics
7.33 (1949).

27. R. Jakobson, "Beitrag zur allgemeinen Kasuslehre," Travaux du Cercle
Linguistique de Prague 6.240-88 (1936).

28. ——, "The phonemic and grammatical aspects of language and their
interrelation," Proceedings of the Sixth International Congress of Linguists 5-18
(Paris, 1948).

29. O. Jespersen, Language (New York, 1922).

30. F. Lounsbury, "A semantic analysis of the Pawnee kinship usage," Language
32.158-94 (1956).

31. B. Mandelbrot, "Simple games of strategy occurring in communication
through natural languages," Transactions of the I.R.E., Professional Group on
Information Theory, PGIT-3, 124-37 (1954).

32. ——, "Structure formelle des textes et communication: deux études,"
Word 10.1-27 (1954).

33. E. Nida, A synopsis of English syntax (South Pasadena, 1951).

34. K. L. Pike, "Grammatical prerequisites to phonemic analysis," Word
3.155-72 (1947).

35. ——, "More on grammatical prerequisites," Word 8.106-21 (1952).

36. W. V. Quine, From a logical point of view (Cambridge, 1953).

37. C. E. Shannon and W. Weaver, The mathematical theory of communication
(Urbana, 1949).

38. H. A. Simon, "On a class of skew distribution functions," Biometrika
42.425-40 (1955).

39. R. S. Wells, "Immediate constituents," Language 23.81-117 (1947).
